From patchwork Wed Feb 19 05:04:45 2025
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 13981475
From: Alistair Popple
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: Alistair Popple, gerald.schaefer@linux.ibm.com, dan.j.williams@intel.com,
    jgg@ziepe.ca, willy@infradead.org, david@redhat.com,
    linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org,
    linux-xfs@vger.kernel.org, jhubbard@nvidia.com, hch@lst.de,
    zhang.lyra@gmail.com, debug@rivosinc.com, bjorn@kernel.org,
    balbirs@nvidia.com
Subject: [PATCH RFC v2 01/12] mm: Remove PFN_MAP, PFN_SG_CHAIN and PFN_SG_LAST
Date: Wed, 19 Feb 2025 16:04:45 +1100
Message-ID: <5b91f54d5e608e0ba4555e6d107a58a9d7f7e2ad.1739941374.git-series.apopple@nvidia.com>
The PFN_MAP flag is no longer used for anything, so remove it. The
PFN_SG_CHAIN and PFN_SG_LAST flags never appear to have been used so
also remove them.

Signed-off-by: Alistair Popple
Reviewed-by: Christoph Hellwig
---
 include/linux/pfn_t.h             | 31 +++----------------------------
 mm/memory.c                       |  2 --
 tools/testing/nvdimm/test/iomap.c |  4 ----
 3 files changed, 3 insertions(+), 34 deletions(-)

diff --git a/include/linux/pfn_t.h b/include/linux/pfn_t.h
index 2d91482..46afa12 100644
--- a/include/linux/pfn_t.h
+++ b/include/linux/pfn_t.h
@@ -5,26 +5,13 @@
 /*
  * PFN_FLAGS_MASK - mask of all the possible valid pfn_t flags
- * PFN_SG_CHAIN - pfn is a pointer to the next scatterlist entry
- * PFN_SG_LAST - pfn references a page and is the last scatterlist entry
  * PFN_DEV - pfn is not covered by system memmap by default
- * PFN_MAP - pfn has a dynamic page mapping established by a device driver
- * PFN_SPECIAL - for CONFIG_FS_DAX_LIMITED builds to allow XIP, but not
- *		 get_user_pages
  */
 #define PFN_FLAGS_MASK (((u64) (~PAGE_MASK)) << (BITS_PER_LONG_LONG - PAGE_SHIFT))
-#define PFN_SG_CHAIN (1ULL << (BITS_PER_LONG_LONG - 1))
-#define PFN_SG_LAST (1ULL << (BITS_PER_LONG_LONG - 2))
 #define PFN_DEV (1ULL << (BITS_PER_LONG_LONG - 3))
-#define PFN_MAP (1ULL << (BITS_PER_LONG_LONG - 4))
-#define PFN_SPECIAL (1ULL << (BITS_PER_LONG_LONG - 5))

 #define PFN_FLAGS_TRACE \
-	{ PFN_SPECIAL,	"SPECIAL" }, \
-	{ PFN_SG_CHAIN,	"SG_CHAIN" }, \
-	{ PFN_SG_LAST,	"SG_LAST" }, \
-	{ PFN_DEV,	"DEV" }, \
-	{ PFN_MAP,	"MAP" }
+	{ PFN_DEV,	"DEV" }

 static inline pfn_t __pfn_to_pfn_t(unsigned long pfn, u64 flags)
 {
@@ -46,7 +33,7 @@ static inline pfn_t phys_to_pfn_t(phys_addr_t addr, u64 flags)

 static inline bool pfn_t_has_page(pfn_t pfn)
 {
-	return (pfn.val & PFN_MAP) == PFN_MAP || (pfn.val & PFN_DEV) == 0;
+	return (pfn.val & PFN_DEV) == 0;
 }

 static inline unsigned long pfn_t_to_pfn(pfn_t pfn)
@@ -100,7 +87,7 @@ static inline pud_t pfn_t_pud(pfn_t pfn, pgprot_t pgprot)
 #ifdef CONFIG_ARCH_HAS_PTE_DEVMAP
 static inline bool pfn_t_devmap(pfn_t pfn)
 {
-	const u64 flags = PFN_DEV|PFN_MAP;
+	const u64 flags = PFN_DEV;

 	return (pfn.val & flags) == flags;
 }
@@ -116,16 +103,4 @@
 pmd_t pmd_mkdevmap(pmd_t pmd);
 pud_t pud_mkdevmap(pud_t pud);
 #endif
 #endif /* CONFIG_ARCH_HAS_PTE_DEVMAP */
-
-#ifdef CONFIG_ARCH_HAS_PTE_SPECIAL
-static inline bool pfn_t_special(pfn_t pfn)
-{
-	return (pfn.val & PFN_SPECIAL) == PFN_SPECIAL;
-}
-#else
-static inline bool pfn_t_special(pfn_t pfn)
-{
-	return false;
-}
-#endif /* CONFIG_ARCH_HAS_PTE_SPECIAL */
 #endif /* _LINUX_PFN_T_H_ */

diff --git a/mm/memory.c b/mm/memory.c
index 1e4424a..bdc8dce 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2570,8 +2570,6 @@ static bool vm_mixed_ok(struct vm_area_struct *vma, pfn_t pfn, bool mkwrite)
 		return true;
 	if (pfn_t_devmap(pfn))
 		return true;
-	if (pfn_t_special(pfn))
-		return true;
 	if (is_zero_pfn(pfn_t_to_pfn(pfn)))
 		return true;
 	return false;

diff --git a/tools/testing/nvdimm/test/iomap.c b/tools/testing/nvdimm/test/iomap.c
index e431372..ddceb04 100644
--- a/tools/testing/nvdimm/test/iomap.c
+++ b/tools/testing/nvdimm/test/iomap.c
@@ -137,10 +137,6 @@ EXPORT_SYMBOL_GPL(__wrap_devm_memremap_pages);

 pfn_t __wrap_phys_to_pfn_t(phys_addr_t addr, unsigned long flags)
 {
-	struct nfit_test_resource *nfit_res = get_nfit_res(addr);
-
-	if (nfit_res)
-		flags &= ~PFN_MAP;
 	return phys_to_pfn_t(addr, flags);
 }
 EXPORT_SYMBOL(__wrap_phys_to_pfn_t);
From patchwork Wed Feb 19 05:04:46 2025
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 13981476
From: Alistair Popple
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: Alistair Popple, gerald.schaefer@linux.ibm.com, dan.j.williams@intel.com,
    jgg@ziepe.ca, willy@infradead.org, david@redhat.com,
    linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org,
    linux-xfs@vger.kernel.org, jhubbard@nvidia.com, hch@lst.de,
    zhang.lyra@gmail.com, debug@rivosinc.com, bjorn@kernel.org,
    balbirs@nvidia.com
Subject: [PATCH RFC v2 02/12] mm: Convert pXd_devmap checks to vma_is_dax
Date: Wed, 19 Feb 2025 16:04:46 +1100
Message-ID: <5142b971de0a9608147c003953781b34aa6a3a45.1739941374.git-series.apopple@nvidia.com>
Currently dax is the only user of pmd and pud mapped ZONE_DEVICE pages.
Therefore page walkers that want to exclude DAX pages can check
pmd_devmap or pud_devmap. However soon dax will no longer set PFN_DEV,
meaning dax pages are mapped as normal pages.

Ensure page walkers that currently use pXd_devmap to skip DAX pages
continue to do so by adding explicit checks of the VMA instead.

Signed-off-by: Alistair Popple
---
 fs/userfaultfd.c | 2 +-
 mm/hmm.c         | 2 +-
 mm/userfaultfd.c | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 97c4d71..27e3ec0 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -304,7 +304,7 @@ static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
 		goto out;

 	ret = false;
-	if (!pmd_present(_pmd) || pmd_devmap(_pmd))
+	if (!pmd_present(_pmd) || vma_is_dax(vmf->vma))
 		goto out;

 	if (pmd_trans_huge(_pmd)) {

diff --git a/mm/hmm.c b/mm/hmm.c
index 082f7b7..db12c0a 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -429,7 +429,7 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
 		return hmm_vma_walk_hole(start, end, -1, walk);
 	}

-	if (pud_leaf(pud) && pud_devmap(pud)) {
+	if (pud_leaf(pud) && vma_is_dax(walk->vma)) {
 		unsigned long i, npages, pfn;
 		unsigned int required_fault;
 		unsigned long *hmm_pfns;

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 867898c..cc6dc18 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -1710,7 +1710,7 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,

 		ptl = pmd_trans_huge_lock(src_pmd, src_vma);
 		if (ptl) {
-			if (pmd_devmap(*src_pmd)) {
+			if (vma_is_dax(src_vma)) {
 				spin_unlock(ptl);
 				err = -ENOENT;
 				break;
From patchwork Wed Feb 19 05:04:47 2025
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 13981492
From: Alistair Popple
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: Alistair Popple, gerald.schaefer@linux.ibm.com, dan.j.williams@intel.com,
    jgg@ziepe.ca, willy@infradead.org, david@redhat.com,
    linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org,
    linux-xfs@vger.kernel.org, jhubbard@nvidia.com, hch@lst.de,
    zhang.lyra@gmail.com, debug@rivosinc.com, bjorn@kernel.org,
    balbirs@nvidia.com
Subject: [PATCH RFC v2 03/12] mm/pagewalk: Skip dax pages in pagewalk
Date: Wed, 19 Feb 2025 16:04:47 +1100
Previously dax pages were skipped by the pagewalk code as pud_special()
or vm_normal_page{_pmd}() would be false for DAX pages. Now that dax
pages are refcounted normally that is no longer the case, so add
explicit checks to skip them.

Signed-off-by: Alistair Popple
---
 include/linux/memremap.h | 11 +++++++++++
 mm/pagewalk.c            | 12 ++++++++++--
 2 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 4aa1519..54e8b57 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -198,6 +198,17 @@ static inline bool folio_is_fsdax(const struct folio *folio)
 	return is_fsdax_page(&folio->page);
 }
 
+static inline bool is_devdax_page(const struct page *page)
+{
+	return is_zone_device_page(page) &&
+		page_pgmap(page)->type == MEMORY_DEVICE_GENERIC;
+}
+
+static inline bool folio_is_devdax(const struct folio *folio)
+{
+	return is_devdax_page(&folio->page);
+}
+
 #ifdef CONFIG_ZONE_DEVICE
 void zone_device_page_init(struct page *page);
 void *memremap_pages(struct dev_pagemap *pgmap, int nid);
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index e478777..0dfb9c2 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -884,6 +884,12 @@ struct folio *folio_walk_start(struct folio_walk *fw,
 		 * support PUD mappings in VM_PFNMAP|VM_MIXEDMAP VMAs.
 		 */
 		page = pud_page(pud);
+
+		if (is_devdax_page(page)) {
+			spin_unlock(ptl);
+			goto not_found;
+		}
+
 		goto found;
 	}
 
@@ -911,7 +917,8 @@ struct folio *folio_walk_start(struct folio_walk *fw,
 		goto pte_table;
 	} else if (pmd_present(pmd)) {
 		page = vm_normal_page_pmd(vma, addr, pmd);
-		if (page) {
+		if (page && !is_devdax_page(page) &&
+		    !is_fsdax_page(page)) {
 			goto found;
 		} else if ((flags & FW_ZEROPAGE) &&
 			   is_huge_zero_pmd(pmd)) {
@@ -945,7 +952,8 @@ struct folio *folio_walk_start(struct folio_walk *fw,
 
 	if (pte_present(pte)) {
 		page = vm_normal_page(vma, addr, pte);
-		if (page)
+		if (page && !is_devdax_page(page) &&
+		    !is_fsdax_page(page))
 			goto found;
 		if ((flags & FW_ZEROPAGE) &&
 		    is_zero_pfn(pte_pfn(pte))) {

From patchwork Wed Feb 19 05:04:48 2025
From: Alistair Popple
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: Alistair Popple, gerald.schaefer@linux.ibm.com, dan.j.williams@intel.com,
 jgg@ziepe.ca, willy@infradead.org, david@redhat.com,
 linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev,
 linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org,
 linux-xfs@vger.kernel.org, jhubbard@nvidia.com, hch@lst.de,
 zhang.lyra@gmail.com, debug@rivosinc.com, bjorn@kernel.org,
 balbirs@nvidia.com
Subject: [PATCH RFC v2 04/12] mm: Convert vmf_insert_mixed() from using pte_devmap to pte_special
Date: Wed, 19 Feb 2025 16:04:48 +1100
X-Mailer: git-send-email 2.45.2
DAX no longer requires device PTEs as it always has a ZONE_DEVICE page
associated with the PTE that can be reference counted normally. Other
users of pte_devmap are drivers that set PFN_DEV when calling
vmf_insert_mixed(), which ensures vm_normal_page() returns NULL for
these entries.

There is no reason to distinguish these pte_devmap users, so in order to
free up a PTE bit use pte_special instead for entries created with
vmf_insert_mixed(). This will ensure vm_normal_page() continues to
return NULL for these pages.

Architectures that don't support pte_special also don't support
pte_devmap, so those will continue to rely on pfn_valid() to determine
if the page can be mapped.

Signed-off-by: Alistair Popple
---
 mm/hmm.c    |  3 ---
 mm/memory.c | 20 ++------------------
 mm/vmscan.c |  2 +-
 3 files changed, 3 insertions(+), 22 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index db12c0a..9e43008 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -292,13 +292,10 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 		goto fault;
 
 	/*
-	 * Bypass devmap pte such as DAX page when all pfn requested
-	 * flags(pfn_req_flags) are fulfilled.
 	 * Since each architecture defines a struct page for the zero page, just
 	 * fall through and treat it like a normal page.
 	 */
 	if (!vm_normal_page(walk->vma, addr, pte) &&
-	    !pte_devmap(pte) &&
 	    !is_zero_pfn(pte_pfn(pte))) {
 		if (hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0)) {
 			pte_unmap(ptep);
diff --git a/mm/memory.c b/mm/memory.c
index bdc8dce..84447c7 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -605,16 +605,6 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 			return NULL;
 		if (is_zero_pfn(pfn))
 			return NULL;
-		if (pte_devmap(pte))
-		/*
-		 * NOTE: New users of ZONE_DEVICE will not set pte_devmap()
-		 * and will have refcounts incremented on their struct pages
-		 * when they are inserted into PTEs, thus they are safe to
-		 * return here. Legacy ZONE_DEVICE pages that set pte_devmap()
-		 * do not have refcounts. Example of legacy ZONE_DEVICE is
-		 * MEMORY_DEVICE_FS_DAX type in pmem or virtio_fs drivers.
-		 */
-			return NULL;
 
 		print_bad_pte(vma, addr, pte, NULL);
 		return NULL;
@@ -2454,10 +2444,7 @@ static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr,
 	}
 
 	/* Ok, finally just insert the thing.. */
-	if (pfn_t_devmap(pfn))
-		entry = pte_mkdevmap(pfn_t_pte(pfn, prot));
-	else
-		entry = pte_mkspecial(pfn_t_pte(pfn, prot));
+	entry = pte_mkspecial(pfn_t_pte(pfn, prot));
 
 	if (mkwrite) {
 		entry = pte_mkyoung(entry);
@@ -2568,8 +2555,6 @@ static bool vm_mixed_ok(struct vm_area_struct *vma, pfn_t pfn, bool mkwrite)
 	/* these checks mirror the abort conditions in vm_normal_page */
 	if (vma->vm_flags & VM_MIXEDMAP)
 		return true;
-	if (pfn_t_devmap(pfn))
-		return true;
 	if (is_zero_pfn(pfn_t_to_pfn(pfn)))
 		return true;
 	return false;
@@ -2599,8 +2584,7 @@ static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma,
 	 * than insert_pfn). If a zero_pfn were inserted into a VM_MIXEDMAP
 	 * without pte special, it would there be refcounted as a normal page.
 	 */
-	if (!IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL) &&
-	    !pfn_t_devmap(pfn) && pfn_t_valid(pfn)) {
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL) && pfn_t_valid(pfn)) {
 		struct page *page;
 
 		/*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index fcca38b..b7b4b7f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3377,7 +3377,7 @@ static unsigned long get_pte_pfn(pte_t pte, struct vm_area_struct *vma, unsigned
 	if (!pte_present(pte) || is_zero_pfn(pfn))
 		return -1;
 
-	if (WARN_ON_ONCE(pte_devmap(pte) || pte_special(pte)))
+	if (WARN_ON_ONCE(pte_special(pte)))
 		return -1;
 
 	if (!pte_young(pte) && !mm_has_notifiers(vma->vm_mm))

From patchwork Wed Feb 19 05:04:49 2025
From: Alistair Popple
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: Alistair Popple, gerald.schaefer@linux.ibm.com, dan.j.williams@intel.com,
 jgg@ziepe.ca, willy@infradead.org, david@redhat.com,
 linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev,
 linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org,
 linux-xfs@vger.kernel.org, jhubbard@nvidia.com, hch@lst.de,
 zhang.lyra@gmail.com, debug@rivosinc.com, bjorn@kernel.org,
 balbirs@nvidia.com
Subject: [PATCH RFC v2 05/12] mm: Remove remaining uses of PFN_DEV
Date: Wed, 19 Feb 2025 16:04:49 +1100
Message-ID: <32f2dec28a8e9fb5e193989c5c69ea269dc70dce.1739941374.git-series.apopple@nvidia.com>
X-Mailer: git-send-email 2.45.2
PFN_DEV was used by callers of dax_direct_access() to figure out if the
returned PFN is associated with a page using pfn_t_has_page() or not.
However all DAX PFNs now require an associated ZONE_DEVICE page, so
callers can assume a page exists.

Other users of PFN_DEV were setting it before calling
vmf_insert_mixed(). This is unnecessary as it is no longer checked,
instead relying on pfn_valid() to determine if there is an associated
page or not.

Signed-off-by: Alistair Popple
Reviewed-by: Christoph Hellwig
---
 drivers/gpu/drm/gma500/fbdev.c     |  2 +-
 drivers/gpu/drm/omapdrm/omap_gem.c |  5 ++---
 drivers/s390/block/dcssblk.c       |  3 +--
 drivers/vfio/pci/vfio_pci_core.c   |  6 ++----
 fs/cramfs/inode.c                  |  2 +-
 include/linux/pfn_t.h              | 25 ++-----------------------
 mm/memory.c                        |  4 ++--
 7 files changed, 11 insertions(+), 36 deletions(-)

diff --git a/drivers/gpu/drm/gma500/fbdev.c b/drivers/gpu/drm/gma500/fbdev.c
index 8edefea..109efdc 100644
--- a/drivers/gpu/drm/gma500/fbdev.c
+++ b/drivers/gpu/drm/gma500/fbdev.c
@@ -33,7 +33,7 @@ static vm_fault_t psb_fbdev_vm_fault(struct vm_fault *vmf)
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 
 	for (i = 0; i < page_num; ++i) {
-		err = vmf_insert_mixed(vma, address, __pfn_to_pfn_t(pfn, PFN_DEV));
+		err = vmf_insert_mixed(vma, address, __pfn_to_pfn_t(pfn, 0));
 		if (unlikely(err & VM_FAULT_ERROR))
 			break;
 		address += PAGE_SIZE;
diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
index b9c67e4..9df05b2 100644
--- a/drivers/gpu/drm/omapdrm/omap_gem.c
+++ b/drivers/gpu/drm/omapdrm/omap_gem.c
@@ -371,8 +371,7 @@ static vm_fault_t omap_gem_fault_1d(struct
drm_gem_object *obj,
 	VERB("Inserting %p pfn %lx, pa %lx", (void *)vmf->address, pfn,
 	     pfn << PAGE_SHIFT);
 
-	return vmf_insert_mixed(vma, vmf->address,
-			__pfn_to_pfn_t(pfn, PFN_DEV));
+	return vmf_insert_mixed(vma, vmf->address, __pfn_to_pfn_t(pfn, 0));
 }
 
 /* Special handling for the case of faulting in 2d tiled buffers */
@@ -468,7 +467,7 @@ static vm_fault_t omap_gem_fault_2d(struct drm_gem_object *obj,
 
 	for (i = n; i > 0; i--) {
 		ret = vmf_insert_mixed(vma,
-				vaddr, __pfn_to_pfn_t(pfn, PFN_DEV));
+				vaddr, __pfn_to_pfn_t(pfn, 0));
 		if (ret & VM_FAULT_ERROR)
 			break;
 		pfn += priv->usergart[fmt].stride_pfn;
diff --git a/drivers/s390/block/dcssblk.c b/drivers/s390/block/dcssblk.c
index 7248e54..02d7a21 100644
--- a/drivers/s390/block/dcssblk.c
+++ b/drivers/s390/block/dcssblk.c
@@ -923,8 +923,7 @@ __dcssblk_direct_access(struct dcssblk_dev_info *dev_info, pgoff_t pgoff,
 	if (kaddr)
 		*kaddr = __va(dev_info->start + offset);
 	if (pfn)
-		*pfn = __pfn_to_pfn_t(PFN_DOWN(dev_info->start + offset),
-				PFN_DEV);
+		*pfn = __pfn_to_pfn_t(PFN_DOWN(dev_info->start + offset), 0);
 
 	return (dev_sz - offset) / PAGE_SIZE;
 }
diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index 586e49e..383e034 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -1677,14 +1677,12 @@ static vm_fault_t vfio_pci_mmap_huge_fault(struct vm_fault *vmf,
 		break;
 #ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
 	case PMD_ORDER:
-		ret = vmf_insert_pfn_pmd(vmf,
-				__pfn_to_pfn_t(pfn, PFN_DEV), false);
+		ret = vmf_insert_pfn_pmd(vmf, __pfn_to_pfn_t(pfn, 0), false);
 		break;
 #endif
 #ifdef CONFIG_ARCH_SUPPORTS_PUD_PFNMAP
 	case PUD_ORDER:
-		ret = vmf_insert_pfn_pud(vmf,
-				__pfn_to_pfn_t(pfn, PFN_DEV), false);
+		ret = vmf_insert_pfn_pud(vmf, __pfn_to_pfn_t(pfn, 0), false);
 		break;
 #endif
 	default:
diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
index b84d174..820a664 100644
--- a/fs/cramfs/inode.c
+++ b/fs/cramfs/inode.c
@@ -412,7 +412,7 @@ static int
cramfs_physmem_mmap(struct file *file, struct vm_area_struct *vma)
 	for (i = 0; i < pages && !ret; i++) {
 		vm_fault_t vmf;
 		unsigned long off = i * PAGE_SIZE;
-		pfn_t pfn = phys_to_pfn_t(address + off, PFN_DEV);
+		pfn_t pfn = phys_to_pfn_t(address + off, 0);
 		vmf = vmf_insert_mixed(vma, vma->vm_start + off, pfn);
 		if (vmf & VM_FAULT_ERROR)
 			ret = vm_fault_to_errno(vmf, 0);
diff --git a/include/linux/pfn_t.h b/include/linux/pfn_t.h
index 46afa12..be8c174 100644
--- a/include/linux/pfn_t.h
+++ b/include/linux/pfn_t.h
@@ -8,10 +8,8 @@
  * PFN_DEV - pfn is not covered by system memmap by default
  */
 #define PFN_FLAGS_MASK (((u64) (~PAGE_MASK)) << (BITS_PER_LONG_LONG - PAGE_SHIFT))
-#define PFN_DEV (1ULL << (BITS_PER_LONG_LONG - 3))
 
-#define PFN_FLAGS_TRACE \
-	{ PFN_DEV, "DEV" }
+#define PFN_FLAGS_TRACE { }
 
 static inline pfn_t __pfn_to_pfn_t(unsigned long pfn, u64 flags)
 {
@@ -33,7 +31,7 @@ static inline pfn_t phys_to_pfn_t(phys_addr_t addr, u64 flags)
 
 static inline bool pfn_t_has_page(pfn_t pfn)
 {
-	return (pfn.val & PFN_DEV) == 0;
+	return true;
 }
 
 static inline unsigned long pfn_t_to_pfn(pfn_t pfn)
@@ -84,23 +82,4 @@ static inline pud_t pfn_t_pud(pfn_t pfn, pgprot_t pgprot)
 #endif
 #endif
 
-#ifdef CONFIG_ARCH_HAS_PTE_DEVMAP
-static inline bool pfn_t_devmap(pfn_t pfn)
-{
-	const u64 flags = PFN_DEV;
-
-	return (pfn.val & flags) == flags;
-}
-#else
-static inline bool pfn_t_devmap(pfn_t pfn)
-{
-	return false;
-}
-pte_t pte_mkdevmap(pte_t pte);
-pmd_t pmd_mkdevmap(pmd_t pmd);
-#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && \
-	defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
-pud_t pud_mkdevmap(pud_t pud);
-#endif
-#endif /* CONFIG_ARCH_HAS_PTE_DEVMAP */
 #endif /* _LINUX_PFN_T_H_ */
diff --git a/mm/memory.c b/mm/memory.c
index 84447c7..a527c70 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2513,9 +2513,9 @@ vm_fault_t vmf_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
 	if (!pfn_modify_allowed(pfn, pgprot))
 		return VM_FAULT_SIGBUS;
 
-	track_pfn_insert(vma, &pgprot,
-			__pfn_to_pfn_t(pfn, PFN_DEV));
+	track_pfn_insert(vma, &pgprot, __pfn_to_pfn_t(pfn, 0));
 
-	return insert_pfn(vma, addr, __pfn_to_pfn_t(pfn, PFN_DEV), pgprot,
+	return insert_pfn(vma, addr, __pfn_to_pfn_t(pfn, 0), pgprot,
 			false);
 }
 EXPORT_SYMBOL(vmf_insert_pfn_prot);

From patchwork Wed Feb 19 05:04:50 2025
smtp.subspace.kernel.org; spf=fail smtp.mailfrom=nvidia.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=Nvidia.com header.i=@Nvidia.com header.b="epCT/TD7" ARC-Seal: i=1; a=rsa-sha256; s=arcselector10001; d=microsoft.com; cv=none; b=VReVlRkeUgsGayqO7K/Pfa6voME02u1mFDdKZI6epNHz0iEgzDF2QlLRTa2u+cFFqKkhEMNT0dsoCsvGebzhwrlUl0JjOwYJL8pcXZ0miXt9IIJWFVrlQfZWS9NL1kIrkwmL/Gk8gygfCL69RH9gMTA1Dk+CAD4Klj6Fkygw8YM0jWFpzI8oeXW17De9rQgyJY03DI+D40dMAsHKrodSA8v2Vuu4Fq/ZxKUsiCsPL3ll8+nR3OmT8eAJqW9srVpFQQlJgwqIrWfHze/F5VWCEoziJ1YhpDJ8SH+gNhpgmOXJR90c/Hpb9MuElcrhW3dudtm8QyLuIFd6kFW+kl9YTg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector10001; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=jObJa/rMZSPU+8Wdjomf0Tvi4esknf7mPa9topDk5IA=; b=au95sRy471JM6kd0z23k8RN+pGRqkKLdsozdPP3ITwtYdAiZE1ERcMbfqGzKc03F7/dxI5LlW1UBkN00j7u2z8Rcz0U3Bv5/wqYkg70EqLR8HhocStplLHU8NoaRCAZPkRqqL07yHrW5ggG1/3fIjxypyBEwj/XIZURCNx6VOLecQ7DMrbtvswt3Bd1ETkycqlH4V/GBGEWymZFu47KNvGKn/M80ZLhugKlo/F//P4BmrZuEHpwZUpDLCP1Z4RiMnw1DNsBJeuWF/prhbqzreJzIhqvW+SIo5pRSRrxjDfIfD8rFoi8CN0heAvrXIN6/XEB8MmxP3KARrseVY3Gazg== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass smtp.mailfrom=nvidia.com; dmarc=pass action=none header.from=nvidia.com; dkim=pass header.d=nvidia.com; arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=jObJa/rMZSPU+8Wdjomf0Tvi4esknf7mPa9topDk5IA=; 
b=epCT/TD7xAxYaM0zWGZXNDhAiajsR2auWHMEsdSDhGs1ZnbNX+ie3i6gxnqJ49vowD/6/nHaNKURRb5Ev+wH7LPlSvyKKuEpKRokbpDOyOVJ8f6bLwBF8b51nQku0baCQK1/aolDM878Ntj1P8G0Lvw+iuYXYn0BP0eoHrDTZbsxYn2ma2LpnmYNKwNlOqcu/W43HSRUSujKkQPUXWb9CydNeW+E1wwLmECFGHXX2QyNuvF7V7i4/ozenMlbv73aNUfP2k8hNomcWZbOL7zhLEXj0nQWT9zBw0odMMY1YUXqrQTOtg1eWzY+LSCoYp9ugDatdno5P2GR0Mx0Gchcyw== Authentication-Results: dkim=none (message not signed) header.d=none;dmarc=none action=none header.from=nvidia.com; Received: from DS0PR12MB7726.namprd12.prod.outlook.com (2603:10b6:8:130::6) by SJ2PR12MB8875.namprd12.prod.outlook.com (2603:10b6:a03:543::13) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.8445.14; Wed, 19 Feb 2025 05:05:44 +0000 Received: from DS0PR12MB7726.namprd12.prod.outlook.com ([fe80::953f:2f80:90c5:67fe]) by DS0PR12MB7726.namprd12.prod.outlook.com ([fe80::953f:2f80:90c5:67fe%7]) with mapi id 15.20.8445.017; Wed, 19 Feb 2025 05:05:44 +0000 From: Alistair Popple To: akpm@linux-foundation.org, linux-mm@kvack.org Cc: Alistair Popple , gerald.schaefer@linux.ibm.com, dan.j.williams@intel.com, jgg@ziepe.ca, willy@infradead.org, david@redhat.com, linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org, jhubbard@nvidia.com, hch@lst.de, zhang.lyra@gmail.com, debug@rivosinc.com, bjorn@kernel.org, balbirs@nvidia.com Subject: [PATCH RFC v2 06/12] mm/gup: Remove pXX_devmap usage from get_user_pages() Date: Wed, 19 Feb 2025 16:04:50 +1100 Message-ID: X-Mailer: git-send-email 2.45.2 In-Reply-To: References: X-ClientProxiedBy: SY6PR01CA0155.ausprd01.prod.outlook.com (2603:10c6:10:1ba::9) To DS0PR12MB7726.namprd12.prod.outlook.com (2603:10b6:8:130::6) Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: DS0PR12MB7726:EE_|SJ2PR12MB8875:EE_ 
X-MS-Office365-Filtering-Correlation-Id: e0ae25c3-1f30-415a-dc76-08dd50a3105f X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0;ARA:13230040|1800799024|7416014|376014|366016; X-Microsoft-Antispam-Message-Info: WYBxxw5aVd3SDRk0k0k9wrtTB/Fmy33eKn2FSH3rmk8szPqrZJ18KJBm0+xUcrcL+xIKJGCKWlRNF+rWMFDpHIo2p8c275qEKa4vOXYl5dAYhrGUUrObMIWqJ9O48Xwt8Nq6228w2Wkb+3TW93y53MU7lkpNus3ow7s8E81tKn0m2p+j1/QDFpX74bhE2c/VWhsRcSdOQHidVo2CKJ5sFTqH+CYY5cNozi79beHH0MoGIvOwcSbLw/79LuQnk0Ustm1/IAPqYwtVpaXFGWQqTr5T7doIpVk8FgVwC+aez5zESzZ/NtMQ+vo6M/94Z8MTJ4m4hJMJnTQC86i7vxEW8LArwFsyaykqXI+rmAdArXME95z+4ZDptpBe8P/V96ftXt9BrnPkCfwl8mzSOQU7GSRChKlJ3U5KM0HwR1WUJ/oPY2teEl8UOOnycUBC2E57fEYv8LvhdAKK0gvpNEvxjYduTjTFOZC+C36gSuQSVYD1boQ3mOdKFxr1SWAJUETBnSetp9qpjXHXx5FyQjBtv9FdUdNCsc1JfjhyrpbV2acv7qprJ6R+kbq0LH6GzHT7Gcu2i2PXcGuSYq4qqjLvWe4jiMrTV/eRBbeEtpOCMGwoqEWIX7HdRG+klCcamEZz1rjrRaTVc+ryzY8uOMVEfsqNCTNWtK8tI/BI1NW7IlXZU9A7ylzYTK3qobzMkkjr1cEF5EwDwAL11y8LVnrJnP1WbEQ6Esu8tDLpBQ5rAQTGsHxsa0qX88Fijz7sMylB8xZ/GhYRIpiH/1V30Qx+Qp7SF7/8+SVVFFEyD9X8H04et68R8TKwjSX+r6CxU6oTnLDZcUu9KLL4bHcvbTe9snnLQ/Fq73v0WexhtNqJT4myRwGcNALfF3Pi8KdxOHQdFgzNAQbHnnZgiVzVrwwGNx1wniLpq9WTkL7g9u59+p7GG2+lZXhSh0bghOSCI36eNcfVdwTvakOL/NF8ABGoUYpdiljJnJpTp+HaZNHa4VGKyAH0rAhMZZc3JPJmu54d4ageWxPk98sQzDO4L6SPgJtptfwk/hbH9REUywmCq9q037GFbzVeg1mvMr3gkrFM5BRKrkp/Oj8wH1hwTVD+4U1DsHu9Js/0r+NuBSFPOv7CEmDQIF7YpchaL0YTqNGCpKSxEufPU61k9AA7vJtfabfOUas70PdkXyRe/fCOqoa8sxg5jaNWJB870CqlQ8IsqVono1YruX2aAujjdMVUreIqC631vsd09LwJRfJecqfTv758YSH9nU408yfLpYDnuOPf1brYOCzko2rRKKoKBi5at6BI2Yc6KXBqOSVQt/dgZbNefbgMOjpFk8T0/AhRCyJCE0e/95cHDKWTub8zbhcQ9NwJO37zV5cqMZS3PDzBtiq2Bhd18jB8X9yaMTBF X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS0PR12MB7726.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230040)(1800799024)(7416014)(376014)(366016);DIR:OUT;SFP:1101; X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1 X-MS-Exchange-AntiSpam-MessageData-0: 
F9MCGwlAHRKRCV3IFhqe28EbfAH1WSNXI3eSZxrhMKCwW89vkPFd5kCRrZv+lI0UTFXR17kaJKKNXeiOlW8q2HwzEoXJ0VbeRlPGY4ZnI3IMkeBX2OjbLioGVi8oUkD1Sm8M9TUsx8Dee24EkKXXg6wHdlnif5dV5nXOnc5m4lsDdIGry/T72ikKyQefgoRIHG0hz1yD9NohjlvlwkYDOrq5e0MuXu/eeHgtaNmFibMJbojajU4E5lOYrWq/0aMSXgopeAqZ+QqUXEF3zU8cNNrpLiXdOiz2h/gXDok3U/C1AEqN9rH7gQ1Z60vAzq4V95w5Kg/ltueZ17185EQI6l+sz+5cAqlchYXlr7PmDFu7jOHmcSYm8bKenb2h38U8wISThMNrlIRqG8M0JGRHTiIAjVu0POjfoSzmgaOypjH9xANwnwMCdf9dzWXeNsKs6KW+iEiqdPFF5yro7ZX2iGarp59MXUI1ROMd6tl2mjRhqzrh6Kkh46IHYelRMbVjF8LAvl2ir2SSpdFq/6puRCDa25uts5k2BsZEtUqlVRGgwf6gOCpNQH6u3Sh5n4GuAV0y99RX7OPfkzFR6TDQUlxxTCfoOypL5yqsD4JwKXk+6zTMKfNybuQzuzhYMVGKMBckix56CHlxIPOIY9MWEiUh/92eQD9Wt27J4HNuDm8J1Bwv6WpuDWw5mAy6CCLwi+wyWjEkP71A6JNlSys09lOTrHU+tsOm/2rxdVZWi0wAtXxF+YrX05bVGRMvuVR61J6OcAu8bqgumCwL1QJupbpe1xcvov50QDpVu1O12BGxK8Z5xOg0nQo9uRRBfkjDiBLC8AO7Qg7Cay42/t7CDbfVysFKcQpVC7vGwrljFvBAKuZmOzd573RiEVTyfF8jf+39jEoWMIofwUcwRUIVAoyWIhafuKOUTQnngQQm7p4Haf3sXRvJj+vUv7Z0ENsCLjSCdd6wHF7X0IlP9hm8z2CSeHiTV8djeNhvw6+F93at6JbgA9SCvOHSzDZ1BBhTajgX3HFnMP+PmT6lqz/IoIxoS+jEzEWyDr5P8Nbrk+wywRwGrDd8LWTVknDcy9c32e/5P2MMZKoa/ZmDmWqeuNOrUl/mQs1mvjlPb1Y5eH1U2uQNbx/buWWgR3nuGrIiun8U2bK1lReFa+ViPLlk+Pcmuf9JFVHm9kpsg6iu5YuXi9BVD4nGtBYdJAOWGdpa3UfNIz8sblmOkcQnrfvRNPDjN6ESXsvgsSa4l0RO0PuisE58KTLY5bLgvsAeDh8u0sWBpnNoKNQz5BLS2Qd1/jTIzGMhqxEpGuhCWxG6bneayNwshSrw4RSaFH7zkxL3Gi/ogcHbrWiNyLCwPQsklJ96/l7sGxb17iGhIW/8NLlQLg9jER3oFn6Hrh3gxV3X2zBjJSJGAr1cJQLLAKlm8kGfEekhaFkYkOfW6MwRAxxl3dzwAfy2WQsoefnCBJcJovfs8F5DCOhwvRm0+gtWP1SIF56+6QAZ4SipFZoB/mmZ9MQULhuLDsDP7YeyWjfFibU0dIxDFKMgpXRw9Jv+FLAeQDSVjpC9GYZ3Rdd2ke0UJbCdcBU7YokreytQxOC5 X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-Network-Message-Id: e0ae25c3-1f30-415a-dc76-08dd50a3105f X-MS-Exchange-CrossTenant-AuthSource: DS0PR12MB7726.namprd12.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Internal X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Feb 2025 05:05:44.3912 (UTC) X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted X-MS-Exchange-CrossTenant-Id: 
GUP uses pXX_devmap() calls to see if it needs to get a reference on the
associated pgmap data structure to ensure the pages won't go away. However it
is a driver's responsibility to ensure that if pages are mapped
(i.e. discoverable by GUP) they are not offlined or removed from the memmap,
so there is no need to hold a reference on the pgmap data structure to ensure
this. Furthermore, mappings with PFN_DEV are no longer created, hence this is
effectively dead code anyway and can be removed.

Signed-off-by: Alistair Popple

---
 include/linux/huge_mm.h |   3 +-
 mm/gup.c                | 162 +----------------------------------------
 mm/huge_memory.c        |  40 +----------
 3 files changed, 5 insertions(+), 200 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index e57e811..22bc207 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -444,9 +444,6 @@ static inline bool folio_test_pmd_mappable(struct folio *folio)
 	return folio_order(folio) >= HPAGE_PMD_ORDER;
 }
 
-struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
-		pmd_t *pmd, int flags, struct dev_pagemap **pgmap);
-
 vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);
 
 extern struct folio *huge_zero_folio;
diff --git a/mm/gup.c b/mm/gup.c
index e504065..18dfb27 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -678,31 +678,9 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma,
 		return NULL;
 
 	pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
-
-	if (IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) &&
-	    pud_devmap(pud)) {
-		/*
-		 * device mapped pages can only be returned if the caller
-		 * will manage the page reference count.
-		 *
-		 * At least one of FOLL_GET | FOLL_PIN must be set, so
-		 * assert that here:
-		 */
-		if (!(flags & (FOLL_GET | FOLL_PIN)))
-			return ERR_PTR(-EEXIST);
-
-		if (flags & FOLL_TOUCH)
-			touch_pud(vma, addr, pudp, flags & FOLL_WRITE);
-
-		ctx->pgmap = get_dev_pagemap(pfn, ctx->pgmap);
-		if (!ctx->pgmap)
-			return ERR_PTR(-EFAULT);
-	}
-
 	page = pfn_to_page(pfn);
-	if (!pud_devmap(pud) && !pud_write(pud) &&
-	    gup_must_unshare(vma, flags, page))
+	if (!pud_write(pud) && gup_must_unshare(vma, flags, page))
 		return ERR_PTR(-EMLINK);
 
 	ret = try_grab_folio(page_folio(page), 1, flags);
@@ -861,8 +839,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	page = vm_normal_page(vma, address, pte);
 
 	/*
-	 * We only care about anon pages in can_follow_write_pte() and don't
-	 * have to worry about pte_devmap() because they are never anon.
+	 * We only care about anon pages in can_follow_write_pte().
 	 */
 	if ((flags & FOLL_WRITE) &&
 	    !can_follow_write_pte(pte, page, vma, flags)) {
@@ -870,18 +847,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 		goto out;
 	}
 
-	if (!page && pte_devmap(pte) && (flags & (FOLL_GET | FOLL_PIN))) {
-		/*
-		 * Only return device mapping pages in the FOLL_GET or FOLL_PIN
-		 * case since they are only valid while holding the pgmap
-		 * reference.
-		 */
-		*pgmap = get_dev_pagemap(pte_pfn(pte), *pgmap);
-		if (*pgmap)
-			page = pte_page(pte);
-		else
-			goto no_page;
-	} else if (unlikely(!page)) {
+	if (unlikely(!page)) {
 		if (flags & FOLL_DUMP) {
 			/* Avoid special (like zero) pages in core dumps */
 			page = ERR_PTR(-EFAULT);
@@ -963,14 +929,6 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 		return no_page_table(vma, flags, address);
 	if (!pmd_present(pmdval))
 		return no_page_table(vma, flags, address);
-	if (pmd_devmap(pmdval)) {
-		ptl = pmd_lock(mm, pmd);
-		page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
-		spin_unlock(ptl);
-		if (page)
-			return page;
-		return no_page_table(vma, flags, address);
-	}
 	if (likely(!pmd_leaf(pmdval)))
 		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
@@ -2889,7 +2847,7 @@ static int gup_fast_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
 		int *nr)
 {
 	struct dev_pagemap *pgmap = NULL;
-	int nr_start = *nr, ret = 0;
+	int ret = 0;
 	pte_t *ptep, *ptem;
 
 	ptem = ptep = pte_offset_map(&pmd, addr);
@@ -2913,16 +2871,7 @@ static int gup_fast_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
 		if (!pte_access_permitted(pte, flags & FOLL_WRITE))
 			goto pte_unmap;
 
-		if (pte_devmap(pte)) {
-			if (unlikely(flags & FOLL_LONGTERM))
-				goto pte_unmap;
-
-			pgmap = get_dev_pagemap(pte_pfn(pte), pgmap);
-			if (unlikely(!pgmap)) {
-				gup_fast_undo_dev_pagemap(nr, nr_start, flags, pages);
-				goto pte_unmap;
-			}
-		} else if (pte_special(pte))
+		if (pte_special(pte))
 			goto pte_unmap;
 
 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
@@ -2993,91 +2942,6 @@ static int gup_fast_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
 }
 #endif /* CONFIG_ARCH_HAS_PTE_SPECIAL */
 
-#if defined(CONFIG_ARCH_HAS_PTE_DEVMAP) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
-static int gup_fast_devmap_leaf(unsigned long pfn, unsigned long addr,
-	unsigned long end, unsigned int flags, struct page **pages, int *nr)
-{
-	int nr_start = *nr;
-	struct dev_pagemap *pgmap = NULL;
-
-	do {
-		struct folio *folio;
-		struct page *page = pfn_to_page(pfn);
-
-		pgmap = get_dev_pagemap(pfn, pgmap);
-		if (unlikely(!pgmap)) {
-			gup_fast_undo_dev_pagemap(nr, nr_start, flags, pages);
-			break;
-		}
-
-		folio = try_grab_folio_fast(page, 1, flags);
-		if (!folio) {
-			gup_fast_undo_dev_pagemap(nr, nr_start, flags, pages);
-			break;
-		}
-		folio_set_referenced(folio);
-		pages[*nr] = page;
-		(*nr)++;
-		pfn++;
-	} while (addr += PAGE_SIZE, addr != end);
-
-	put_dev_pagemap(pgmap);
-	return addr == end;
-}
-
-static int gup_fast_devmap_pmd_leaf(pmd_t orig, pmd_t *pmdp, unsigned long addr,
-		unsigned long end, unsigned int flags, struct page **pages,
-		int *nr)
-{
-	unsigned long fault_pfn;
-	int nr_start = *nr;
-
-	fault_pfn = pmd_pfn(orig) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
-	if (!gup_fast_devmap_leaf(fault_pfn, addr, end, flags, pages, nr))
-		return 0;
-
-	if (unlikely(pmd_val(orig) != pmd_val(*pmdp))) {
-		gup_fast_undo_dev_pagemap(nr, nr_start, flags, pages);
-		return 0;
-	}
-	return 1;
-}
-
-static int gup_fast_devmap_pud_leaf(pud_t orig, pud_t *pudp, unsigned long addr,
-		unsigned long end, unsigned int flags, struct page **pages,
-		int *nr)
-{
-	unsigned long fault_pfn;
-	int nr_start = *nr;
-
-	fault_pfn = pud_pfn(orig) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
-	if (!gup_fast_devmap_leaf(fault_pfn, addr, end, flags, pages, nr))
-		return 0;
-
-	if (unlikely(pud_val(orig) != pud_val(*pudp))) {
-		gup_fast_undo_dev_pagemap(nr, nr_start, flags, pages);
-		return 0;
-	}
-	return 1;
-}
-#else
-static int gup_fast_devmap_pmd_leaf(pmd_t orig, pmd_t *pmdp, unsigned long addr,
-		unsigned long end, unsigned int flags, struct page **pages,
-		int *nr)
-{
-	BUILD_BUG();
-	return 0;
-}
-
-static int gup_fast_devmap_pud_leaf(pud_t pud, pud_t *pudp, unsigned long addr,
-		unsigned long end, unsigned int flags, struct page **pages,
-		int *nr)
-{
-	BUILD_BUG();
-	return 0;
-}
-#endif
-
 static int gup_fast_pmd_leaf(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 		unsigned long end, unsigned int flags, struct page **pages,
 		int *nr)
@@ -3092,13 +2956,6 @@ static int gup_fast_pmd_leaf(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 	if (pmd_special(orig))
 		return 0;
 
-	if (pmd_devmap(orig)) {
-		if (unlikely(flags & FOLL_LONGTERM))
-			return 0;
-		return gup_fast_devmap_pmd_leaf(orig, pmdp, addr, end, flags,
-			pages, nr);
-	}
-
 	page = pmd_page(orig);
 	refs = record_subpages(page, PMD_SIZE, addr, end, pages + *nr);
@@ -3139,13 +2996,6 @@ static int gup_fast_pud_leaf(pud_t orig, pud_t *pudp, unsigned long addr,
 	if (pud_special(orig))
 		return 0;
 
-	if (pud_devmap(orig)) {
-		if (unlikely(flags & FOLL_LONGTERM))
-			return 0;
-		return gup_fast_devmap_pud_leaf(orig, pudp, addr, end, flags,
-			pages, nr);
-	}
-
 	page = pud_page(orig);
 	refs = record_subpages(page, PUD_SIZE, addr, end, pages + *nr);
@@ -3184,8 +3034,6 @@ static int gup_fast_pgd_leaf(pgd_t orig, pgd_t *pgdp, unsigned long addr,
 	if (!pgd_access_permitted(orig, flags & FOLL_WRITE))
 		return 0;
 
-	BUILD_BUG_ON(pgd_devmap(orig));
-
 	page = pgd_page(orig);
 	refs = record_subpages(page, PGDIR_SIZE, addr, end, pages + *nr);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 468e8ea..a87f7a2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1648,46 +1648,6 @@ void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
 	update_mmu_cache_pmd(vma, addr, pmd);
 }
 
-struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
-		pmd_t *pmd, int flags, struct dev_pagemap **pgmap)
-{
-	unsigned long pfn = pmd_pfn(*pmd);
-	struct mm_struct *mm = vma->vm_mm;
-	struct page *page;
-	int ret;
-
-	assert_spin_locked(pmd_lockptr(mm, pmd));
-
-	if (flags & FOLL_WRITE && !pmd_write(*pmd))
-		return NULL;
-
-	if (pmd_present(*pmd) && pmd_devmap(*pmd))
-		/* pass */;
-	else
-		return NULL;
-
-	if (flags & FOLL_TOUCH)
-		touch_pmd(vma, addr, pmd, flags & FOLL_WRITE);
-
-	/*
-	 * device mapped pages can only be returned if the
-	 * caller will manage the page reference count.
-	 */
-	if (!(flags & (FOLL_GET | FOLL_PIN)))
-		return ERR_PTR(-EEXIST);
-
-	pfn += (addr & ~PMD_MASK) >> PAGE_SHIFT;
-	*pgmap = get_dev_pagemap(pfn, *pgmap);
-	if (!*pgmap)
-		return ERR_PTR(-EFAULT);
-	page = pfn_to_page(pfn);
-	ret = try_grab_folio(page_folio(page), 1, flags);
-	if (ret)
-		page = ERR_PTR(ret);
-
-	return page;
-}
-
 int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		  pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
 		  struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)

From patchwork Wed Feb 19 05:04:51 2025
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 13981496
From: Alistair Popple
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: Alistair Popple, gerald.schaefer@linux.ibm.com, dan.j.williams@intel.com,
    jgg@ziepe.ca, willy@infradead.org, david@redhat.com,
    linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org,
    linux-xfs@vger.kernel.org, jhubbard@nvidia.com, hch@lst.de,
    zhang.lyra@gmail.com, debug@rivosinc.com, bjorn@kernel.org,
    balbirs@nvidia.com
Subject: [PATCH RFC v2 07/12] mm: Remove redundant pXd_devmap calls
Date: Wed, 19 Feb 2025 16:04:51 +1100
Message-ID: <5d4a957ae0357641eb03b2a1b45b3d8af94030c7.1739941374.git-series.apopple@nvidia.com>
X-Mailer: git-send-email 2.45.2
X-Mailing-List: linux-fsdevel@vger.kernel.org
DAX was the only thing that created pmd_devmap and pud_devmap entries, but it
no longer does so, as DAX pages are now refcounted normally and
pXd_trans_huge() returns true for those. Therefore checking both pXd_devmap()
and pXd_trans_huge() is redundant, and the former can be removed without
changing behaviour as it will always be false.

Signed-off-by: Alistair Popple

---
 fs/dax.c                   |  5 ++---
 include/linux/huge_mm.h    | 10 ++++------
 include/linux/pgtable.h    |  2 +-
 mm/hmm.c                   |  4 ++--
 mm/huge_memory.c           | 31 +++++++++----------------------
 mm/mapping_dirty_helpers.c |  4 ++--
 mm/memory.c                | 15 ++++++---------
 mm/migrate_device.c        |  2 +-
 mm/mprotect.c              |  2 +-
 mm/mremap.c                |  5 ++---
 mm/page_vma_mapped.c       |  5 ++---
 mm/pagewalk.c              |  8 +++-----
 mm/pgtable-generic.c       |  7 +++----
 mm/userfaultfd.c           |  4 ++--
 mm/vmscan.c                |  3 ---
 15 files changed, 40 insertions(+), 67 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index cf96f3d..e26fb6b 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1932,7 +1932,7 @@ static vm_fault_t dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
 	 * the PTE we need to set up. If so just return and the fault will be
 	 * retried.
 	 */
-	if (pmd_trans_huge(*vmf->pmd) || pmd_devmap(*vmf->pmd)) {
+	if (pmd_trans_huge(*vmf->pmd)) {
 		ret = VM_FAULT_NOPAGE;
 		goto unlock_entry;
 	}
@@ -2053,8 +2053,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
 	 * the PMD we need to set up. If so just return and the fault will be
 	 * retried.
 	 */
-	if (!pmd_none(*vmf->pmd) && !pmd_trans_huge(*vmf->pmd) &&
-			!pmd_devmap(*vmf->pmd)) {
+	if (!pmd_none(*vmf->pmd) && !pmd_trans_huge(*vmf->pmd)) {
 		ret = 0;
 		goto unlock_entry;
 	}
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 22bc207..f427053 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -370,8 +370,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 #define split_huge_pmd(__vma, __pmd, __address)				\
 	do {								\
 		pmd_t *____pmd = (__pmd);				\
-		if (is_swap_pmd(*____pmd) || pmd_trans_huge(*____pmd)	\
-					|| pmd_devmap(*____pmd))	\
+		if (is_swap_pmd(*____pmd) || pmd_trans_huge(*____pmd))	\
 			__split_huge_pmd(__vma, __pmd, __address,	\
 						false, NULL);		\
 	}  while (0)
@@ -397,8 +396,7 @@ change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
 #define split_huge_pud(__vma, __pud, __address)				\
 	do {								\
 		pud_t *____pud = (__pud);				\
-		if (pud_trans_huge(*____pud)				\
-					|| pud_devmap(*____pud))	\
+		if (pud_trans_huge(*____pud))				\
 			__split_huge_pud(__vma, __pud, __address);	\
 	}  while (0)
@@ -421,7 +419,7 @@ static inline int is_swap_pmd(pmd_t pmd)
 static inline spinlock_t *pmd_trans_huge_lock(pmd_t *pmd,
 		struct vm_area_struct *vma)
 {
-	if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd))
+	if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd))
 		return __pmd_trans_huge_lock(pmd, vma);
 	else
 		return NULL;
@@ -429,7 +427,7 @@ static inline spinlock_t *pmd_trans_huge_lock(pmd_t *pmd,
 static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
 		struct vm_area_struct *vma)
 {
-	if (pud_trans_huge(*pud) || pud_devmap(*pud))
+	if (pud_trans_huge(*pud))
 		return __pud_trans_huge_lock(pud, vma);
 	else
 		return NULL;
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 94d267d..00e4a06 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1635,7 +1635,7 @@ static inline int pud_trans_unstable(pud_t *pud)
 	defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
 	pud_t pudval = READ_ONCE(*pud);
 
-	if (pud_none(pudval) || pud_trans_huge(pudval) || pud_devmap(pudval))
+	if (pud_none(pudval) || pud_trans_huge(pudval))
 		return 1;
 	if (unlikely(pud_bad(pudval))) {
 		pud_clear_bad(pud);
diff --git a/mm/hmm.c b/mm/hmm.c
index 9e43008..5037f98 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -348,7 +348,7 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 		return hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
 	}
 
-	if (pmd_devmap(pmd) || pmd_trans_huge(pmd)) {
+	if (pmd_trans_huge(pmd)) {
 		/*
 		 * No need to take pmd_lock here, even if some other thread
 		 * is splitting the huge pmd we will get that event through
@@ -359,7 +359,7 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 		 * values.
 		 */
 		pmd = pmdp_get_lockless(pmdp);
-		if (!pmd_devmap(pmd) && !pmd_trans_huge(pmd))
+		if (!pmd_trans_huge(pmd))
 			goto again;
 
 		return hmm_vma_handle_pmd(walk, addr, end, hmm_pfns, pmd);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a87f7a2..1962b8e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1400,10 +1400,7 @@ static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
 	}
 
 	entry = pmd_mkhuge(pfn_t_pmd(pfn, prot));
-	if (pfn_t_devmap(pfn))
-		entry = pmd_mkdevmap(entry);
-	else
-		entry = pmd_mkspecial(entry);
+	entry = pmd_mkspecial(entry);
 	if (write) {
 		entry = pmd_mkyoung(pmd_mkdirty(entry));
 		entry = maybe_pmd_mkwrite(entry, vma);
@@ -1443,8 +1440,6 @@ vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write)
 	 * but we need to be consistent with PTEs and architectures that
 	 * can't support a 'special' bit.
 	 */
-	BUG_ON(!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) &&
-			!pfn_t_devmap(pfn));
 	BUG_ON((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) ==
 						(VM_PFNMAP|VM_MIXEDMAP));
 	BUG_ON((vma->vm_flags & VM_PFNMAP) && is_cow_mapping(vma->vm_flags));
@@ -1537,10 +1532,7 @@ static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
 	}
 
 	entry = pud_mkhuge(pfn_t_pud(pfn, prot));
-	if (pfn_t_devmap(pfn))
-		entry = pud_mkdevmap(entry);
-	else
-		entry = pud_mkspecial(entry);
+	entry = pud_mkspecial(entry);
 	if (write) {
 		entry = pud_mkyoung(pud_mkdirty(entry));
 		entry = maybe_pud_mkwrite(entry, vma);
@@ -1571,8 +1563,6 @@ vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write)
 	 * but we need to be consistent with PTEs and architectures that
 	 * can't support a 'special' bit.
 	 */
-	BUG_ON(!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) &&
-			!pfn_t_devmap(pfn));
 	BUG_ON((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) ==
 						(VM_PFNMAP|VM_MIXEDMAP));
 	BUG_ON((vma->vm_flags & VM_PFNMAP) && is_cow_mapping(vma->vm_flags));
@@ -1799,7 +1789,7 @@ int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 
 	ret = -EAGAIN;
 	pud = *src_pud;
-	if (unlikely(!pud_trans_huge(pud) && !pud_devmap(pud)))
+	if (unlikely(!pud_trans_huge(pud)))
 		goto out_unlock;
 
 	/*
@@ -2653,8 +2643,7 @@ spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma)
 {
 	spinlock_t *ptl;
 	ptl = pmd_lock(vma->vm_mm, pmd);
-	if (likely(is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) ||
-			pmd_devmap(*pmd)))
+	if (likely(is_swap_pmd(*pmd) || pmd_trans_huge(*pmd)))
 		return ptl;
 	spin_unlock(ptl);
 	return NULL;
@@ -2671,7 +2660,7 @@ spinlock_t *__pud_trans_huge_lock(pud_t *pud, struct vm_area_struct *vma)
 	spinlock_t *ptl;
 
 	ptl = pud_lock(vma->vm_mm, pud);
-	if (likely(pud_trans_huge(*pud) || pud_devmap(*pud)))
+	if (likely(pud_trans_huge(*pud)))
 		return ptl;
 	spin_unlock(ptl);
 	return NULL;
@@ -2723,7 +2712,7 @@ static void __split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
 	VM_BUG_ON(haddr & ~HPAGE_PUD_MASK);
 	VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
 	VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PUD_SIZE, vma);
-	VM_BUG_ON(!pud_trans_huge(*pud) && !pud_devmap(*pud));
+	VM_BUG_ON(!pud_trans_huge(*pud));
 
 	count_vm_event(THP_SPLIT_PUD);
@@ -2756,7 +2745,7 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
 				(address & HPAGE_PUD_MASK) + HPAGE_PUD_SIZE);
 	mmu_notifier_invalidate_range_start(&range);
 	ptl = pud_lock(vma->vm_mm, pud);
-	if (unlikely(!pud_trans_huge(*pud) && !pud_devmap(*pud)))
+	if (unlikely(!pud_trans_huge(*pud)))
 		goto out;
 	__split_huge_pud_locked(vma, pud, range.start);
@@ -2829,8 +2818,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	VM_BUG_ON(haddr & ~HPAGE_PMD_MASK);
 	VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
 	VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PMD_SIZE, vma);
-	VM_BUG_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd)
-				&& !pmd_devmap(*pmd));
+	VM_BUG_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd));
 
 	count_vm_event(THP_SPLIT_PMD);
@@ -3047,8 +3035,7 @@ void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
 	 * require a folio to check the PMD against. Otherwise, there
 	 * is a risk of replacing the wrong folio.
 	 */
-	if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
-	    is_pmd_migration_entry(*pmd)) {
+	if (pmd_trans_huge(*pmd) || is_pmd_migration_entry(*pmd)) {
 		if (folio && folio != pmd_folio(*pmd))
 			return;
 		__split_huge_pmd_locked(vma, pmd, address, freeze);
diff --git a/mm/mapping_dirty_helpers.c b/mm/mapping_dirty_helpers.c
index 2f8829b..208b428 100644
--- a/mm/mapping_dirty_helpers.c
+++ b/mm/mapping_dirty_helpers.c
@@ -129,7 +129,7 @@ static int wp_clean_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long end,
 	pmd_t pmdval = pmdp_get_lockless(pmd);
 
 	/* Do not split a huge pmd, present or migrated */
-	if (pmd_trans_huge(pmdval) || pmd_devmap(pmdval)) {
+	if (pmd_trans_huge(pmdval)) {
 		WARN_ON(pmd_write(pmdval) || pmd_dirty(pmdval));
 		walk->action = ACTION_CONTINUE;
 	}
@@ -152,7 +152,7 @@ static int wp_clean_pud_entry(pud_t *pud, unsigned long addr, unsigned long end,
 	pud_t pudval = READ_ONCE(*pud);
 
 	/* Do not split a huge pud */
-	if (pud_trans_huge(pudval) || pud_devmap(pudval)) {
+	if (pud_trans_huge(pudval)) {
 		WARN_ON(pud_write(pudval) || pud_dirty(pudval));
 		walk->action = ACTION_CONTINUE;
 	}
diff --git a/mm/memory.c b/mm/memory.c
index a527c70..296ef2c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -682,8 +682,6 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
 		}
 	}
 
-	if (pmd_devmap(pmd))
-		return NULL;
 	if (is_huge_zero_pmd(pmd))
 		return NULL;
 	if (unlikely(pfn > highest_memmap_pfn))
@@ -1226,8 +1224,7 @@ copy_pmd_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 	src_pmd = pmd_offset(src_pud, addr);
 	do {
 		next = pmd_addr_end(addr, end);
-		if (is_swap_pmd(*src_pmd) || pmd_trans_huge(*src_pmd)
-			|| pmd_devmap(*src_pmd)) {
+		if (is_swap_pmd(*src_pmd) || pmd_trans_huge(*src_pmd)) {
 			int err;
 			VM_BUG_ON_VMA(next-addr != HPAGE_PMD_SIZE, src_vma);
 			err = copy_huge_pmd(dst_mm, src_mm, dst_pmd, src_pmd,
@@ -1263,7 +1260,7 @@ copy_pud_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 	src_pud = pud_offset(src_p4d,
addr); do { next = pud_addr_end(addr, end); - if (pud_trans_huge(*src_pud) || pud_devmap(*src_pud)) { + if (pud_trans_huge(*src_pud)) { int err; VM_BUG_ON_VMA(next-addr != HPAGE_PUD_SIZE, src_vma); @@ -1788,7 +1785,7 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb, pmd = pmd_offset(pud, addr); do { next = pmd_addr_end(addr, end); - if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd)) { + if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd)) { if (next - addr != HPAGE_PMD_SIZE) __split_huge_pmd(vma, pmd, addr, false, NULL); else if (zap_huge_pmd(tlb, vma, pmd, addr)) { @@ -1830,7 +1827,7 @@ static inline unsigned long zap_pud_range(struct mmu_gather *tlb, pud = pud_offset(p4d, addr); do { next = pud_addr_end(addr, end); - if (pud_trans_huge(*pud) || pud_devmap(*pud)) { + if (pud_trans_huge(*pud)) { if (next - addr != HPAGE_PUD_SIZE) { mmap_assert_locked(tlb->mm); split_huge_pud(vma, pud, addr); @@ -6000,7 +5997,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma, pud_t orig_pud = *vmf.pud; barrier(); - if (pud_trans_huge(orig_pud) || pud_devmap(orig_pud)) { + if (pud_trans_huge(orig_pud)) { /* * TODO once we support anonymous PUDs: NUMA case and @@ -6041,7 +6038,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma, pmd_migration_entry_wait(mm, vmf.pmd); return 0; } - if (pmd_trans_huge(vmf.orig_pmd) || pmd_devmap(vmf.orig_pmd)) { + if (pmd_trans_huge(vmf.orig_pmd)) { if (pmd_protnone(vmf.orig_pmd) && vma_is_accessible(vma)) return do_huge_pmd_numa_page(&vmf); diff --git a/mm/migrate_device.c b/mm/migrate_device.c index 6771893..49c3984 100644 --- a/mm/migrate_device.c +++ b/mm/migrate_device.c @@ -599,7 +599,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate, pmdp = pmd_alloc(mm, pudp, addr); if (!pmdp) goto abort; - if (pmd_trans_huge(*pmdp) || pmd_devmap(*pmdp)) + if (pmd_trans_huge(*pmdp)) goto abort; if (pte_alloc(mm, pmdp)) goto abort; diff --git a/mm/mprotect.c b/mm/mprotect.c index 
1444878..4aec7b2 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -376,7 +376,7 @@ static inline long change_pmd_range(struct mmu_gather *tlb, goto next; _pmd = pmdp_get_lockless(pmd); - if (is_swap_pmd(_pmd) || pmd_trans_huge(_pmd) || pmd_devmap(_pmd)) { + if (is_swap_pmd(_pmd) || pmd_trans_huge(_pmd)) { if ((next - addr != HPAGE_PMD_SIZE) || pgtable_split_needed(vma, cp_flags)) { __split_huge_pmd(vma, pmd, addr, false, NULL); diff --git a/mm/mremap.c b/mm/mremap.c index cff7f55..e9cfb0b 100644 --- a/mm/mremap.c +++ b/mm/mremap.c @@ -633,7 +633,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma, new_pud = alloc_new_pud(vma->vm_mm, vma, new_addr); if (!new_pud) break; - if (pud_trans_huge(*old_pud) || pud_devmap(*old_pud)) { + if (pud_trans_huge(*old_pud)) { if (extent == HPAGE_PUD_SIZE) { move_pgt_entry(HPAGE_PUD, vma, old_addr, new_addr, old_pud, new_pud, need_rmap_locks); @@ -655,8 +655,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma, if (!new_pmd) break; again: - if (is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd) || - pmd_devmap(*old_pmd)) { + if (is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd)) { if (extent == HPAGE_PMD_SIZE && move_pgt_entry(HPAGE_PMD, vma, old_addr, new_addr, old_pmd, new_pmd, need_rmap_locks)) diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c index 32679be..614150d 100644 --- a/mm/page_vma_mapped.c +++ b/mm/page_vma_mapped.c @@ -241,8 +241,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw) */ pmde = pmdp_get_lockless(pvmw->pmd); - if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde) || - (pmd_present(pmde) && pmd_devmap(pmde))) { + if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde)) { pvmw->ptl = pmd_lock(mm, pvmw->pmd); pmde = *pvmw->pmd; if (!pmd_present(pmde)) { @@ -257,7 +256,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw) return not_found(pvmw); return true; } - if (likely(pmd_trans_huge(pmde) || pmd_devmap(pmde))) { + if (likely(pmd_trans_huge(pmde))) { 
if (pvmw->flags & PVMW_MIGRATION) return not_found(pvmw); if (!check_pmd(pmd_pfn(pmde), pvmw)) diff --git a/mm/pagewalk.c b/mm/pagewalk.c index 0dfb9c2..cca170f 100644 --- a/mm/pagewalk.c +++ b/mm/pagewalk.c @@ -143,8 +143,7 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end, * We are ONLY installing, so avoid unnecessarily * splitting a present huge page. */ - if (pmd_present(*pmd) && - (pmd_trans_huge(*pmd) || pmd_devmap(*pmd))) + if (pmd_present(*pmd) && pmd_trans_huge(*pmd)) continue; } @@ -210,8 +209,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end, * We are ONLY installing, so avoid unnecessarily * splitting a present huge page. */ - if (pud_present(*pud) && - (pud_trans_huge(*pud) || pud_devmap(*pud))) + if (pud_present(*pud) && pud_trans_huge(*pud)) continue; } @@ -872,7 +870,7 @@ struct folio *folio_walk_start(struct folio_walk *fw, * TODO: FW_MIGRATION support for PUD migration entries * once there are relevant users. 
*/ - if (!pud_present(pud) || pud_devmap(pud) || pud_special(pud)) { + if (!pud_present(pud) || pud_special(pud)) { spin_unlock(ptl); goto not_found; } else if (!pud_leaf(pud)) { diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c index 5a882f2..567e2d0 100644 --- a/mm/pgtable-generic.c +++ b/mm/pgtable-generic.c @@ -139,8 +139,7 @@ pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address, { pmd_t pmd; VM_BUG_ON(address & ~HPAGE_PMD_MASK); - VM_BUG_ON(pmd_present(*pmdp) && !pmd_trans_huge(*pmdp) && - !pmd_devmap(*pmdp)); + VM_BUG_ON(pmd_present(*pmdp) && !pmd_trans_huge(*pmdp)); pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp); flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE); return pmd; @@ -153,7 +152,7 @@ pud_t pudp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address, pud_t pud; VM_BUG_ON(address & ~HPAGE_PUD_MASK); - VM_BUG_ON(!pud_trans_huge(*pudp) && !pud_devmap(*pudp)); + VM_BUG_ON(!pud_trans_huge(*pudp)); pud = pudp_huge_get_and_clear(vma->vm_mm, address, pudp); flush_pud_tlb_range(vma, address, address + HPAGE_PUD_SIZE); return pud; @@ -293,7 +292,7 @@ pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp) *pmdvalp = pmdval; if (unlikely(pmd_none(pmdval) || is_pmd_migration_entry(pmdval))) goto nomap; - if (unlikely(pmd_trans_huge(pmdval) || pmd_devmap(pmdval))) + if (unlikely(pmd_trans_huge(pmdval))) goto nomap; if (unlikely(pmd_bad(pmdval))) { pmd_clear_bad(pmd); diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c index cc6dc18..38e88b1 100644 --- a/mm/userfaultfd.c +++ b/mm/userfaultfd.c @@ -794,8 +794,8 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx, * (This includes the case where the PMD used to be THP and * changed back to none after __pte_alloc().) 
*/ - if (unlikely(!pmd_present(dst_pmdval) || + pmd_trans_huge(dst_pmdval))) { err = -EEXIST; break; } diff --git a/mm/vmscan.c b/mm/vmscan.c index b7b4b7f..463d045 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -3402,9 +3402,6 @@ static unsigned long get_pmd_pfn(pmd_t pmd, struct vm_area_struct *vma, unsigned if (!pmd_present(pmd) || is_huge_zero_pmd(pmd)) return -1; - if (WARN_ON_ONCE(pmd_devmap(pmd))) - return -1; - if (!pmd_young(pmd) && !mm_has_notifiers(vma->vm_mm)) return -1; From patchwork Wed Feb 19 05:04:52 2025 X-Patchwork-Submitter: Alistair Popple X-Patchwork-Id: 13981497 From: Alistair Popple To: akpm@linux-foundation.org, linux-mm@kvack.org Cc: Alistair Popple , gerald.schaefer@linux.ibm.com, dan.j.williams@intel.com, jgg@ziepe.ca, willy@infradead.org, david@redhat.com, linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org, jhubbard@nvidia.com, hch@lst.de, zhang.lyra@gmail.com, debug@rivosinc.com, bjorn@kernel.org, balbirs@nvidia.com Subject: [PATCH RFC v2 08/12] mm/khugepaged: Remove redundant pmd_devmap() check Date: Wed, 19 Feb 2025 16:04:52 +1100 Message-ID: <02c0a4d9ea3f0470e2af485296b6ff4f1f4a87e3.1739941374.git-series.apopple@nvidia.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org MIME-Version: 1.0
The only users of pmd_devmap were device dax and fs dax. The check for pmd_devmap() in check_pmd_state() is therefore redundant, as callers explicitly check for is_zone_device_page(), so this check can be dropped. Signed-off-by: Alistair Popple --- mm/khugepaged.c | 2 -- 1 file changed, 2 deletions(-) diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 5f0be13..7eeae33 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -958,8 +958,6 @@ static inline int check_pmd_state(pmd_t *pmd) return SCAN_PMD_NULL; if (pmd_trans_huge(pmde)) return SCAN_PMD_MAPPED; - if (pmd_devmap(pmde)) - return SCAN_PMD_NULL; if (pmd_bad(pmde)) return SCAN_PMD_NULL; return SCAN_SUCCEED; From patchwork Wed Feb 19 05:04:53 2025 X-Patchwork-Submitter: Alistair Popple X-Patchwork-Id: 13981498
From: Alistair Popple To: akpm@linux-foundation.org, linux-mm@kvack.org Cc: Alistair Popple , gerald.schaefer@linux.ibm.com, dan.j.williams@intel.com, jgg@ziepe.ca, willy@infradead.org, david@redhat.com, linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev, linux-fsdevel@vger.kernel.org,
linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org, jhubbard@nvidia.com, hch@lst.de, zhang.lyra@gmail.com, debug@rivosinc.com, bjorn@kernel.org, balbirs@nvidia.com Subject: [PATCH RFC v2 09/12] powerpc: Remove checks for devmap pages and PMDs/PUDs Date: Wed, 19 Feb 2025 16:04:53 +1100 Message-ID: <62587f381d0e718e9f456f375885d93c80ef1110.1739941374.git-series.apopple@nvidia.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org MIME-Version: 1.0
PFN_DEV no longer exists. This means no devmap PMDs or PUDs will be created, so checking for them is redundant. Instead, mappings of pages that would have previously returned true for pXd_devmap() will return true for pXd_trans_huge(). Signed-off-by: Alistair Popple --- arch/powerpc/mm/book3s64/hash_hugepage.c | 2 +- arch/powerpc/mm/book3s64/hash_pgtable.c | 3 +-- arch/powerpc/mm/book3s64/hugetlbpage.c | 2 +- arch/powerpc/mm/book3s64/pgtable.c | 10 ++++------ arch/powerpc/mm/book3s64/radix_pgtable.c | 5 ++--- arch/powerpc/mm/pgtable.c | 2 +- 6 files changed, 10 insertions(+), 14 deletions(-) diff --git a/arch/powerpc/mm/book3s64/hash_hugepage.c b/arch/powerpc/mm/book3s64/hash_hugepage.c index 15d6f3e..cdfd4fe 100644 --- a/arch/powerpc/mm/book3s64/hash_hugepage.c +++ b/arch/powerpc/mm/book3s64/hash_hugepage.c @@ -54,7 +54,7 @@ int __hash_page_thp(unsigned long ea, unsigned long access, unsigned long vsid, /* * Make sure this is thp or devmap entry */ - if (!(old_pmd & (H_PAGE_THP_HUGE | _PAGE_DEVMAP))) + if (!(old_pmd & H_PAGE_THP_HUGE)) return 0; rflags = htab_convert_pte_flags(new_pmd, flags); diff --git a/arch/powerpc/mm/book3s64/hash_pgtable.c b/arch/powerpc/mm/book3s64/hash_pgtable.c index 988948d..82d3117 100644 --- a/arch/powerpc/mm/book3s64/hash_pgtable.c +++ b/arch/powerpc/mm/book3s64/hash_pgtable.c @@ -195,7 +195,7 @@ unsigned long hash__pmd_hugepage_update(struct mm_struct *mm, unsigned long addr unsigned long old; #ifdef CONFIG_DEBUG_VM - WARN_ON(!hash__pmd_trans_huge(*pmdp) && !pmd_devmap(*pmdp)); + WARN_ON(!hash__pmd_trans_huge(*pmdp)); assert_spin_locked(pmd_lockptr(mm, pmdp)); #endif @@ -227,7 +227,6 @@ pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long
addres VM_BUG_ON(address & ~HPAGE_PMD_MASK); VM_BUG_ON(pmd_trans_huge(*pmdp)); - VM_BUG_ON(pmd_devmap(*pmdp)); pmd = *pmdp; pmd_clear(pmdp); diff --git a/arch/powerpc/mm/book3s64/hugetlbpage.c b/arch/powerpc/mm/book3s64/hugetlbpage.c index 83c3361..2bcbbf9 100644 --- a/arch/powerpc/mm/book3s64/hugetlbpage.c +++ b/arch/powerpc/mm/book3s64/hugetlbpage.c @@ -74,7 +74,7 @@ int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid, } while(!pte_xchg(ptep, __pte(old_pte), __pte(new_pte))); /* Make sure this is a hugetlb entry */ - if (old_pte & (H_PAGE_THP_HUGE | _PAGE_DEVMAP)) + if (old_pte & H_PAGE_THP_HUGE) return 0; rflags = htab_convert_pte_flags(new_pte, flags); diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c index ce64abe..49293d0 100644 --- a/arch/powerpc/mm/book3s64/pgtable.c +++ b/arch/powerpc/mm/book3s64/pgtable.c @@ -63,7 +63,7 @@ int pmdp_set_access_flags(struct vm_area_struct *vma, unsigned long address, { int changed; #ifdef CONFIG_DEBUG_VM - WARN_ON(!pmd_trans_huge(*pmdp) && !pmd_devmap(*pmdp)); + WARN_ON(!pmd_trans_huge(*pmdp)); assert_spin_locked(pmd_lockptr(vma->vm_mm, pmdp)); #endif changed = !pmd_same(*(pmdp), entry); @@ -83,7 +83,6 @@ int pudp_set_access_flags(struct vm_area_struct *vma, unsigned long address, { int changed; #ifdef CONFIG_DEBUG_VM - WARN_ON(!pud_devmap(*pudp)); assert_spin_locked(pud_lockptr(vma->vm_mm, pudp)); #endif changed = !pud_same(*(pudp), entry); @@ -205,8 +204,8 @@ pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma, { pmd_t pmd; VM_BUG_ON(addr & ~HPAGE_PMD_MASK); - VM_BUG_ON((pmd_present(*pmdp) && !pmd_trans_huge(*pmdp) && - !pmd_devmap(*pmdp)) || !pmd_present(*pmdp)); + VM_BUG_ON((pmd_present(*pmdp) && !pmd_trans_huge(*pmdp)) || + !pmd_present(*pmdp)); pmd = pmdp_huge_get_and_clear(vma->vm_mm, addr, pmdp); /* * if it not a fullmm flush, then we can possibly end up converting @@ -224,8 +223,7 @@ pud_t pudp_huge_get_and_clear_full(struct vm_area_struct 
*vma, pud_t pud; VM_BUG_ON(addr & ~HPAGE_PMD_MASK); - VM_BUG_ON((pud_present(*pudp) && !pud_devmap(*pudp)) || - !pud_present(*pudp)); + VM_BUG_ON(!pud_present(*pudp)); pud = pudp_huge_get_and_clear(vma->vm_mm, addr, pudp); /* * if it not a fullmm flush, then we can possibly end up converting diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c index 311e211..f0b606d 100644 --- a/arch/powerpc/mm/book3s64/radix_pgtable.c +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c @@ -1412,7 +1412,7 @@ unsigned long radix__pmd_hugepage_update(struct mm_struct *mm, unsigned long add unsigned long old; #ifdef CONFIG_DEBUG_VM - WARN_ON(!radix__pmd_trans_huge(*pmdp) && !pmd_devmap(*pmdp)); + WARN_ON(!radix__pmd_trans_huge(*pmdp)); assert_spin_locked(pmd_lockptr(mm, pmdp)); #endif @@ -1429,7 +1429,7 @@ unsigned long radix__pud_hugepage_update(struct mm_struct *mm, unsigned long add unsigned long old; #ifdef CONFIG_DEBUG_VM - WARN_ON(!pud_devmap(*pudp)); + WARN_ON(!pud_trans_huge(*pudp)); assert_spin_locked(pud_lockptr(mm, pudp)); #endif @@ -1447,7 +1447,6 @@ pmd_t radix__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long addre VM_BUG_ON(address & ~HPAGE_PMD_MASK); VM_BUG_ON(radix__pmd_trans_huge(*pmdp)); - VM_BUG_ON(pmd_devmap(*pmdp)); /* * khugepaged calls this for normal pmd */ diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c index 61df5ae..dfaa9fd 100644 --- a/arch/powerpc/mm/pgtable.c +++ b/arch/powerpc/mm/pgtable.c @@ -509,7 +509,7 @@ pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea, return NULL; #endif - if (pmd_trans_huge(pmd) || pmd_devmap(pmd)) { + if (pmd_trans_huge(pmd)) { if (is_thp) *is_thp = true; ret_pte = (pte_t *)pmdp; From patchwork Wed Feb 19 05:04:54 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Alistair Popple X-Patchwork-Id: 13981499 Received: from NAM11-BN8-obe.outbound.protection.outlook.com 
From: Alistair Popple
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: Alistair Popple, gerald.schaefer@linux.ibm.com, dan.j.williams@intel.com,
    jgg@ziepe.ca, willy@infradead.org, david@redhat.com,
    linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org,
    linux-xfs@vger.kernel.org, jhubbard@nvidia.com, hch@lst.de,
    zhang.lyra@gmail.com, debug@rivosinc.com, bjorn@kernel.org,
    balbirs@nvidia.com, Will Deacon, Björn Töpel
Subject: [PATCH RFC v2 10/12] mm: Remove devmap related functions and page table bits
Date: Wed, 19 Feb 2025 16:04:54 +1100
X-Mailer: git-send-email 2.45.2
Now that DAX and all other reference counts to ZONE_DEVICE pages are managed
normally, there is no need for the special devmap PTE/PMD/PUD page table
bits. So drop all references to these, freeing up a software-defined page
table bit on architectures supporting it.

Signed-off-by: Alistair Popple
Acked-by: Will Deacon # arm64
Suggested-by: Chunyan Zhang
Reviewed-by: Björn Töpel
---
 Documentation/mm/arch_pgtable_helpers.rst     |  6 +--
 arch/arm64/Kconfig                            |  1 +-
 arch/arm64/include/asm/pgtable-prot.h         |  1 +-
 arch/arm64/include/asm/pgtable.h              | 24 +--------
 arch/loongarch/Kconfig                        |  1 +-
 arch/loongarch/include/asm/pgtable-bits.h     |  6 +--
 arch/loongarch/include/asm/pgtable.h          | 19 +------
 arch/powerpc/Kconfig                          |  1 +-
 arch/powerpc/include/asm/book3s/64/hash-4k.h  |  6 +--
 arch/powerpc/include/asm/book3s/64/hash-64k.h |  7 +--
 arch/powerpc/include/asm/book3s/64/pgtable.h  | 53 +------------------
 arch/powerpc/include/asm/book3s/64/radix.h    | 14 +-----
 arch/riscv/Kconfig                            |  1 +-
 arch/riscv/include/asm/pgtable-64.h           | 20 +-------
 arch/riscv/include/asm/pgtable-bits.h         |  1 +-
 arch/riscv/include/asm/pgtable.h              | 17 +------
 arch/x86/Kconfig                              |  1 +-
 arch/x86/include/asm/pgtable.h                | 51 +-----------------
 arch/x86/include/asm/pgtable_types.h          |  5 +--
 include/linux/mm.h                            |  7 +--
 include/linux/pgtable.h                       | 19 +------
 mm/Kconfig                                    |  4 +-
 mm/debug_vm_pgtable.c                         | 59 +--------------------
 mm/hmm.c                                      |  3 +-
 mm/madvise.c                                  |  8 +--
 25 files changed, 17 insertions(+), 318 deletions(-)

diff --git a/Documentation/mm/arch_pgtable_helpers.rst b/Documentation/mm/arch_pgtable_helpers.rst
index af24516..c88c7fa 100644
--- a/Documentation/mm/arch_pgtable_helpers.rst
+++ b/Documentation/mm/arch_pgtable_helpers.rst
@@ -30,8 +30,6 @@ PTE Page Table Helpers
 +---------------------------+--------------------------------------------------+
 | pte_protnone              | Tests a PROT_NONE PTE                            |
 +---------------------------+--------------------------------------------------+
-| pte_devmap                | Tests a ZONE_DEVICE mapped PTE                   |
-+---------------------------+--------------------------------------------------+
 | pte_soft_dirty            | Tests a soft dirty PTE                           |
 +---------------------------+--------------------------------------------------+
 | pte_swp_soft_dirty        | Tests a soft dirty swapped PTE                   |
@@ -104,8 +102,6 @@ PMD Page Table Helpers
 +---------------------------+--------------------------------------------------+
 | pmd_protnone              | Tests a PROT_NONE PMD                            |
 +---------------------------+--------------------------------------------------+
-| pmd_devmap                | Tests a ZONE_DEVICE mapped PMD                   |
-+---------------------------+--------------------------------------------------+
 | pmd_soft_dirty            | Tests a soft dirty PMD                           |
 +---------------------------+--------------------------------------------------+
 | pmd_swp_soft_dirty        | Tests a soft dirty swapped PMD                   |
@@ -177,8 +173,6 @@ PUD Page Table Helpers
 +---------------------------+--------------------------------------------------+
 | pud_write                 | Tests a writable PUD                             |
 +---------------------------+--------------------------------------------------+
-| pud_devmap                | Tests a ZONE_DEVICE mapped PUD                   |
-+---------------------------+--------------------------------------------------+
 | pud_mkyoung               | Creates a young PUD                              |
 +---------------------------+--------------------------------------------------+
 | pud_mkold                 | Creates an old PUD                               |
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5cf688e..c1118c7 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -42,7 +42,6 @@ config ARM64
 	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
 	select ARCH_HAS_NONLEAF_PMD_YOUNG if ARM64_HAFT
 	select ARCH_HAS_PTDUMP
-	select ARCH_HAS_PTE_DEVMAP
 	select ARCH_HAS_PTE_SPECIAL
 	select ARCH_HAS_HW_PTE_YOUNG
 	select ARCH_HAS_SETUP_DMA_OPS
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index a95f1f7..8530663 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -17,7 +17,6 @@
 #define PTE_SWP_EXCLUSIVE	(_AT(pteval_t, 1) << 2)	 /* only for swp ptes */
 #define PTE_DIRTY		(_AT(pteval_t, 1) << 55)
 #define PTE_SPECIAL		(_AT(pteval_t, 1) << 56)
-#define PTE_DEVMAP		(_AT(pteval_t, 1) << 57)
 
 /*
  * PTE_PRESENT_INVALID=1 & PTE_VALID=0 indicates that the pte's fields should be
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 0b2a2ad..596b8dd 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -108,7 +108,6 @@ static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
 #define pte_user(pte)		(!!(pte_val(pte) & PTE_USER))
 #define pte_user_exec(pte)	(!(pte_val(pte) & PTE_UXN))
 #define pte_cont(pte)		(!!(pte_val(pte) & PTE_CONT))
-#define pte_devmap(pte)		(!!(pte_val(pte) & PTE_DEVMAP))
 #define pte_tagged(pte)		((pte_val(pte) & PTE_ATTRINDX_MASK) == \
				 PTE_ATTRINDX(MT_NORMAL_TAGGED))
@@ -290,11 +289,6 @@ static inline pmd_t pmd_mkcont(pmd_t pmd)
 	return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
 }
 
-static inline pte_t pte_mkdevmap(pte_t pte)
-{
-	return set_pte_bit(pte, __pgprot(PTE_DEVMAP | PTE_SPECIAL));
-}
-
 #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
 static inline int pte_uffd_wp(pte_t pte)
 {
@@ -587,14 +581,6 @@ static inline int pmd_trans_huge(pmd_t pmd)
 
 #define pmd_mkhuge(pmd)		(__pmd(pmd_val(pmd) & ~PMD_TABLE_BIT))
 
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-#define pmd_devmap(pmd)		pte_devmap(pmd_pte(pmd))
-#endif
-static inline pmd_t pmd_mkdevmap(pmd_t pmd)
-{
-	return pte_pmd(set_pte_bit(pmd_pte(pmd), __pgprot(PTE_DEVMAP)));
-}
-
 #ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
 #define pmd_special(pte)	(!!((pmd_val(pte) & PTE_SPECIAL)))
 static inline pmd_t pmd_mkspecial(pmd_t pmd)
@@ -1195,16 +1181,6 @@ static inline int pmdp_set_access_flags(struct vm_area_struct *vma,
 	return __ptep_set_access_flags(vma, address, (pte_t *)pmdp,
				       pmd_pte(entry), dirty);
 }
-
-static inline int pud_devmap(pud_t pud)
-{
-	return 0;
-}
-
-static inline int pgd_devmap(pgd_t pgd)
-{
-	return 0;
-}
 #endif
 
 #ifdef CONFIG_PAGE_TABLE_CHECK
diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index 2b8bd27..0f71710 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -25,7 +25,6 @@ config LOONGARCH
 	select ARCH_HAS_NMI_SAFE_THIS_CPU_OPS
 	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
 	select ARCH_HAS_PREEMPT_LAZY
-	select ARCH_HAS_PTE_DEVMAP
 	select ARCH_HAS_PTE_SPECIAL
 	select ARCH_HAS_SET_MEMORY
 	select ARCH_HAS_SET_DIRECT_MAP
diff --git a/arch/loongarch/include/asm/pgtable-bits.h b/arch/loongarch/include/asm/pgtable-bits.h
index 45bfc65..c8777a9 100644
--- a/arch/loongarch/include/asm/pgtable-bits.h
+++ b/arch/loongarch/include/asm/pgtable-bits.h
@@ -22,7 +22,6 @@
 #define	_PAGE_PFN_SHIFT		12
 #define	_PAGE_SWP_EXCLUSIVE_SHIFT 23
 #define	_PAGE_PFN_END_SHIFT	48
-#define	_PAGE_DEVMAP_SHIFT	59
 #define	_PAGE_PRESENT_INVALID_SHIFT 60
 #define	_PAGE_NO_READ_SHIFT	61
 #define	_PAGE_NO_EXEC_SHIFT	62
@@ -36,7 +35,6 @@
 #define _PAGE_MODIFIED		(_ULCAST_(1) << _PAGE_MODIFIED_SHIFT)
 #define _PAGE_PROTNONE		(_ULCAST_(1) << _PAGE_PROTNONE_SHIFT)
 #define _PAGE_SPECIAL		(_ULCAST_(1) << _PAGE_SPECIAL_SHIFT)
-#define _PAGE_DEVMAP		(_ULCAST_(1) << _PAGE_DEVMAP_SHIFT)
 
 /* We borrow bit 23 to store the exclusive marker in swap PTEs. */
 #define _PAGE_SWP_EXCLUSIVE	(_ULCAST_(1) << _PAGE_SWP_EXCLUSIVE_SHIFT)
@@ -76,8 +74,8 @@
 #define __READABLE	(_PAGE_VALID)
 #define __WRITEABLE	(_PAGE_DIRTY | _PAGE_WRITE)
 
-#define _PAGE_CHG_MASK	(_PAGE_MODIFIED | _PAGE_SPECIAL | _PAGE_DEVMAP | _PFN_MASK | _CACHE_MASK | _PAGE_PLV)
-#define _HPAGE_CHG_MASK	(_PAGE_MODIFIED | _PAGE_SPECIAL | _PAGE_DEVMAP | _PFN_MASK | _CACHE_MASK | _PAGE_PLV | _PAGE_HUGE)
+#define _PAGE_CHG_MASK	(_PAGE_MODIFIED | _PAGE_SPECIAL | _PFN_MASK | _CACHE_MASK | _PAGE_PLV)
+#define _HPAGE_CHG_MASK	(_PAGE_MODIFIED | _PAGE_SPECIAL | _PFN_MASK | _CACHE_MASK | _PAGE_PLV | _PAGE_HUGE)
 
 #define PAGE_NONE	__pgprot(_PAGE_PROTNONE | _PAGE_NO_READ | \
				 _PAGE_USER | _CACHE_CC)
diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
index da34673..d83b14b 100644
--- a/arch/loongarch/include/asm/pgtable.h
+++ b/arch/loongarch/include/asm/pgtable.h
@@ -410,9 +410,6 @@ static inline int pte_special(pte_t pte)	{ return pte_val(pte) & _PAGE_SPECIAL;
 static inline pte_t pte_mkspecial(pte_t pte)	{ pte_val(pte) |= _PAGE_SPECIAL; return pte; }
 #endif /* CONFIG_ARCH_HAS_PTE_SPECIAL */
 
-static inline int pte_devmap(pte_t pte)		{ return !!(pte_val(pte) & _PAGE_DEVMAP); }
-static inline pte_t pte_mkdevmap(pte_t pte)	{ pte_val(pte) |= _PAGE_DEVMAP; return pte; }
-
 #define pte_accessible pte_accessible
 static inline unsigned long pte_accessible(struct mm_struct *mm, pte_t a)
 {
@@ -547,17 +544,6 @@ static inline pmd_t pmd_mkyoung(pmd_t pmd)
 	return pmd;
 }
 
-static inline int pmd_devmap(pmd_t pmd)
-{
-	return !!(pmd_val(pmd) & _PAGE_DEVMAP);
-}
-
-static inline pmd_t pmd_mkdevmap(pmd_t pmd)
-{
-	pmd_val(pmd) |= _PAGE_DEVMAP;
-	return pmd;
-}
-
 static inline struct page *pmd_page(pmd_t pmd)
 {
 	if (pmd_trans_huge(pmd))
@@ -613,11 +599,6 @@ static inline long pmd_protnone(pmd_t pmd)
 #define pmd_leaf(pmd)		((pmd_val(pmd) & _PAGE_HUGE) != 0)
 #define pud_leaf(pud)		((pud_val(pud) & _PAGE_HUGE) != 0)
 
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-#define pud_devmap(pud)		(0)
-#define pgd_devmap(pgd)		(0)
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
-
 /*
  * We provide our own get_unmapped area to cope with the virtual aliasing
  * constraints placed on us by the cache architecture.
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 6f1ae41..c71bcba 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -149,7 +149,6 @@ config PPC
 	select ARCH_HAS_PMEM_API
 	select ARCH_HAS_PREEMPT_LAZY
 	select ARCH_HAS_PTDUMP
-	select ARCH_HAS_PTE_DEVMAP		if PPC_BOOK3S_64
 	select ARCH_HAS_PTE_SPECIAL
 	select ARCH_HAS_SCALED_CPUTIME		if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
 	select ARCH_HAS_SET_MEMORY
diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h
index c3efaca..b0546d3 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-4k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h
@@ -160,12 +160,6 @@ extern pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm,
 extern int hash__has_transparent_hugepage(void);
 #endif
 
-static inline pmd_t hash__pmd_mkdevmap(pmd_t pmd)
-{
-	BUG();
-	return pmd;
-}
-
 #endif /* !__ASSEMBLY__ */
 
 #endif /* _ASM_POWERPC_BOOK3S_64_HASH_4K_H */
diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h b/arch/powerpc/include/asm/book3s/64/hash-64k.h
index 0bf6fd0..0fb5b7d 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-64k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h
@@ -259,7 +259,7 @@ static inline void mark_hpte_slot_valid(unsigned char *hpte_slot_array,
  */
 static inline int hash__pmd_trans_huge(pmd_t pmd)
 {
-	return !!((pmd_val(pmd) & (_PAGE_PTE | H_PAGE_THP_HUGE | _PAGE_DEVMAP)) ==
+	return !!((pmd_val(pmd) & (_PAGE_PTE | H_PAGE_THP_HUGE)) ==
		  (_PAGE_PTE | H_PAGE_THP_HUGE));
 }
 
@@ -281,11 +281,6 @@ extern pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm,
 extern int hash__has_transparent_hugepage(void);
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
-static inline pmd_t hash__pmd_mkdevmap(pmd_t pmd)
-{
-	return __pmd(pmd_val(pmd) | (_PAGE_PTE | H_PAGE_THP_HUGE | _PAGE_DEVMAP));
-}
-
 #endif	/* __ASSEMBLY__ */
 
 #endif /* _ASM_POWERPC_BOOK3S_64_HASH_64K_H */
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 6d98e6f..1d98d0a 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -88,7 +88,6 @@
 
 #define _PAGE_SOFT_DIRTY	_RPAGE_SW3 /* software: software dirty tracking */
 #define _PAGE_SPECIAL		_RPAGE_SW2 /* software: special page */
-#define _PAGE_DEVMAP		_RPAGE_SW1 /* software: ZONE_DEVICE page */
 
 /*
  * Drivers request for cache inhibited pte mapping using _PAGE_NO_CACHE
@@ -109,7 +108,7 @@
  */
 #define _HPAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
			 _PAGE_ACCESSED | H_PAGE_THP_HUGE | _PAGE_PTE | \
-			 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
+			 _PAGE_SOFT_DIRTY)
 /*
  * user access blocked by key
  */
@@ -123,7 +122,7 @@
  */
 #define _PAGE_CHG_MASK	(PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
			 _PAGE_ACCESSED | _PAGE_SPECIAL | _PAGE_PTE |	\
-			 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
+			 _PAGE_SOFT_DIRTY)
 
 /*
  * We define 2 sets of base prot bits, one for basic pages (ie,
@@ -609,24 +608,6 @@ static inline pte_t pte_mkhuge(pte_t pte)
 	return pte;
 }
 
-static inline pte_t pte_mkdevmap(pte_t pte)
-{
-	return __pte_raw(pte_raw(pte) | cpu_to_be64(_PAGE_SPECIAL | _PAGE_DEVMAP));
-}
-
-/*
- * This is potentially called with a pmd as the argument, in which case it's not
- * safe to check _PAGE_DEVMAP unless we also confirm that _PAGE_PTE is set.
- * That's because the bit we use for _PAGE_DEVMAP is not reserved for software
- * use in page directory entries (ie. non-ptes).
- */
-static inline int pte_devmap(pte_t pte)
-{
-	__be64 mask = cpu_to_be64(_PAGE_DEVMAP | _PAGE_PTE);
-
-	return (pte_raw(pte) & mask) == mask;
-}
-
 static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 {
 	/* FIXME!! check whether this need to be a conditional */
@@ -1380,36 +1361,6 @@ static inline bool arch_needs_pgtable_deposit(void)
 }
 extern void serialize_against_pte_lookup(struct mm_struct *mm);
 
-static inline pmd_t pmd_mkdevmap(pmd_t pmd)
-{
-	if (radix_enabled())
-		return radix__pmd_mkdevmap(pmd);
-	return hash__pmd_mkdevmap(pmd);
-}
-
-static inline pud_t pud_mkdevmap(pud_t pud)
-{
-	if (radix_enabled())
-		return radix__pud_mkdevmap(pud);
-	BUG();
-	return pud;
-}
-
-static inline int pmd_devmap(pmd_t pmd)
-{
-	return pte_devmap(pmd_pte(pmd));
-}
-
-static inline int pud_devmap(pud_t pud)
-{
-	return pte_devmap(pud_pte(pud));
-}
-
-static inline int pgd_devmap(pgd_t pgd)
-{
-	return 0;
-}
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 #define __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION
diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h
index 8f55ff7..df23a82 100644
--- a/arch/powerpc/include/asm/book3s/64/radix.h
+++ b/arch/powerpc/include/asm/book3s/64/radix.h
@@ -264,7 +264,7 @@ static inline int radix__p4d_bad(p4d_t p4d)
 
 static inline int radix__pmd_trans_huge(pmd_t pmd)
 {
-	return (pmd_val(pmd) & (_PAGE_PTE | _PAGE_DEVMAP)) == _PAGE_PTE;
+	return (pmd_val(pmd) & _PAGE_PTE) == _PAGE_PTE;
 }
 
 static inline pmd_t radix__pmd_mkhuge(pmd_t pmd)
@@ -274,7 +274,7 @@ static inline pmd_t radix__pmd_mkhuge(pmd_t pmd)
 
 static inline int radix__pud_trans_huge(pud_t pud)
 {
-	return (pud_val(pud) & (_PAGE_PTE | _PAGE_DEVMAP)) == _PAGE_PTE;
+	return (pud_val(pud) & _PAGE_PTE) == _PAGE_PTE;
 }
 
 static inline pud_t radix__pud_mkhuge(pud_t pud)
@@ -315,16 +315,6 @@ static inline int radix__has_transparent_pud_hugepage(void)
 }
 #endif
 
-static inline pmd_t radix__pmd_mkdevmap(pmd_t pmd)
-{
-	return __pmd(pmd_val(pmd) | (_PAGE_PTE | _PAGE_DEVMAP));
-}
-
-static inline pud_t radix__pud_mkdevmap(pud_t pud)
-{
-	return __pud(pud_val(pud) | (_PAGE_PTE | _PAGE_DEVMAP));
-}
-
 struct vmem_altmap;
 struct dev_pagemap;
 extern int __meminit radix__vmemmap_create_mapping(unsigned long start,
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 5aef2aa..e929578 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -44,7 +44,6 @@ config RISCV
 	select ARCH_HAS_PREEMPT_LAZY
 	select ARCH_HAS_PREPARE_SYNC_CORE_CMD
 	select ARCH_HAS_PTDUMP
-	select ARCH_HAS_PTE_DEVMAP if 64BIT && MMU
 	select ARCH_HAS_PTE_SPECIAL
 	select ARCH_HAS_SET_DIRECT_MAP if MMU
 	select ARCH_HAS_SET_MEMORY if MMU
diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
index 0897dd9..8c36a88 100644
--- a/arch/riscv/include/asm/pgtable-64.h
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -398,24 +398,4 @@ static inline struct page *pgd_page(pgd_t pgd)
 #define p4d_offset p4d_offset
 p4d_t *p4d_offset(pgd_t *pgd, unsigned long address);
 
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-static inline int pte_devmap(pte_t pte);
-static inline pte_t pmd_pte(pmd_t pmd);
-
-static inline int pmd_devmap(pmd_t pmd)
-{
-	return pte_devmap(pmd_pte(pmd));
-}
-
-static inline int pud_devmap(pud_t pud)
-{
-	return 0;
-}
-
-static inline int pgd_devmap(pgd_t pgd)
-{
-	return 0;
-}
-#endif
-
 #endif /* _ASM_RISCV_PGTABLE_64_H */
diff --git a/arch/riscv/include/asm/pgtable-bits.h b/arch/riscv/include/asm/pgtable-bits.h
index a8f5205..179bd4a 100644
--- a/arch/riscv/include/asm/pgtable-bits.h
+++ b/arch/riscv/include/asm/pgtable-bits.h
@@ -19,7 +19,6 @@
 #define _PAGE_SOFT	(3 << 8)    /* Reserved for software */
 
 #define _PAGE_SPECIAL	(1 << 8)    /* RSW: 0x1 */
-#define _PAGE_DEVMAP	(1 << 9)    /* RSW, devmap */
 #define _PAGE_TABLE	_PAGE_PRESENT
 
 /*
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 050fdc4..915ba5f 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -399,13 +399,6 @@ static inline int pte_special(pte_t pte)
 	return pte_val(pte) & _PAGE_SPECIAL;
 }
 
-#ifdef CONFIG_ARCH_HAS_PTE_DEVMAP
-static inline int pte_devmap(pte_t pte)
-{
-	return pte_val(pte) & _PAGE_DEVMAP;
-}
-#endif
-
 /* static inline pte_t pte_rdprotect(pte_t pte) */
 
 static inline pte_t pte_wrprotect(pte_t pte)
@@ -447,11 +440,6 @@ static inline pte_t pte_mkspecial(pte_t pte)
 	return __pte(pte_val(pte) | _PAGE_SPECIAL);
 }
 
-static inline pte_t pte_mkdevmap(pte_t pte)
-{
-	return __pte(pte_val(pte) | _PAGE_DEVMAP);
-}
-
 static inline pte_t pte_mkhuge(pte_t pte)
 {
 	return pte;
@@ -763,11 +751,6 @@ static inline pmd_t pmd_mkdirty(pmd_t pmd)
 	return pte_pmd(pte_mkdirty(pmd_pte(pmd)));
 }
 
-static inline pmd_t pmd_mkdevmap(pmd_t pmd)
-{
-	return pte_pmd(pte_mkdevmap(pmd_pte(pmd)));
-}
-
 static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
				pmd_t *pmdp, pmd_t pmd)
 {
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 39ecaff..f801bdb 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -97,7 +97,6 @@ config X86
 	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
 	select ARCH_HAS_PMEM_API		if X86_64
 	select ARCH_HAS_PREEMPT_LAZY
-	select ARCH_HAS_PTE_DEVMAP		if X86_64
 	select ARCH_HAS_PTE_SPECIAL
 	select ARCH_HAS_HW_PTE_YOUNG
 	select ARCH_HAS_NONLEAF_PMD_YOUNG	if PGTABLE_LEVELS > 2
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 593f10a..77705be 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -308,16 +308,15 @@ static inline bool pmd_leaf(pmd_t pte)
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-/* NOTE: when predicate huge page, consider also pmd_devmap, or use pmd_leaf */
 static inline int pmd_trans_huge(pmd_t pmd)
 {
-	return (pmd_val(pmd) & (_PAGE_PSE|_PAGE_DEVMAP)) == _PAGE_PSE;
+	return (pmd_val(pmd) & _PAGE_PSE) == _PAGE_PSE;
 }
 
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
 static inline int pud_trans_huge(pud_t pud)
 {
-	return (pud_val(pud) & (_PAGE_PSE|_PAGE_DEVMAP)) == _PAGE_PSE;
+	return (pud_val(pud) & _PAGE_PSE) == _PAGE_PSE;
 }
 #endif
 
@@ -327,24 +326,6 @@ static inline int has_transparent_hugepage(void)
 	return boot_cpu_has(X86_FEATURE_PSE);
 }
 
-#ifdef CONFIG_ARCH_HAS_PTE_DEVMAP
-static inline int pmd_devmap(pmd_t pmd)
-{
-	return !!(pmd_val(pmd) & _PAGE_DEVMAP);
-}
-
-#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
-static inline int pud_devmap(pud_t pud)
-{
-	return !!(pud_val(pud) & _PAGE_DEVMAP);
-}
-#else
-static inline int pud_devmap(pud_t pud)
-{
-	return 0;
-}
-#endif
-
 #ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
 static inline bool pmd_special(pmd_t pmd)
 {
@@ -368,12 +349,6 @@ static inline pud_t pud_mkspecial(pud_t pud)
 	return pud_set_flags(pud, _PAGE_SPECIAL);
 }
 #endif	/* CONFIG_ARCH_SUPPORTS_PUD_PFNMAP */
-
-static inline int pgd_devmap(pgd_t pgd)
-{
-	return 0;
-}
-#endif
 #endif	/* CONFIG_TRANSPARENT_HUGEPAGE */
 
 static inline pte_t pte_set_flags(pte_t pte, pteval_t set)
@@ -534,11 +509,6 @@ static inline pte_t pte_mkspecial(pte_t pte)
 	return pte_set_flags(pte, _PAGE_SPECIAL);
 }
 
-static inline pte_t pte_mkdevmap(pte_t pte)
-{
-	return pte_set_flags(pte, _PAGE_SPECIAL|_PAGE_DEVMAP);
-}
-
 /* See comments above mksaveddirty_shift() */
 static inline pmd_t pmd_mksaveddirty(pmd_t pmd)
 {
@@ -610,11 +580,6 @@ static inline pmd_t pmd_mkwrite_shstk(pmd_t pmd)
 	return pmd_set_flags(pmd, _PAGE_DIRTY);
 }
 
-static inline pmd_t pmd_mkdevmap(pmd_t pmd)
-{
-	return pmd_set_flags(pmd, _PAGE_DEVMAP);
-}
-
 static inline pmd_t pmd_mkhuge(pmd_t pmd)
 {
 	return pmd_set_flags(pmd, _PAGE_PSE);
@@ -680,11 +645,6 @@ static inline pud_t pud_mkdirty(pud_t pud)
 	return pud_mksaveddirty(pud);
 }
 
-static inline pud_t pud_mkdevmap(pud_t pud)
-{
-	return pud_set_flags(pud, _PAGE_DEVMAP);
-}
-
 static inline pud_t pud_mkhuge(pud_t pud)
 {
 	return pud_set_flags(pud, _PAGE_PSE);
@@ -1012,13 +972,6 @@ static inline int pte_present(pte_t a)
 	return pte_flags(a) & (_PAGE_PRESENT | _PAGE_PROTNONE);
 }
 
-#ifdef CONFIG_ARCH_HAS_PTE_DEVMAP
-static inline int pte_devmap(pte_t a)
-{
-	return (pte_flags(a) & _PAGE_DEVMAP) == _PAGE_DEVMAP;
-}
-#endif
-
 #define pte_accessible pte_accessible
 static inline bool pte_accessible(struct mm_struct *mm, pte_t a)
 {
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 4b80453..e4c7b51 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -33,7 +33,6 @@
 #define _PAGE_BIT_CPA_TEST	_PAGE_BIT_SOFTW1
 #define _PAGE_BIT_UFFD_WP	_PAGE_BIT_SOFTW2 /* userfaultfd wrprotected */
 #define _PAGE_BIT_SOFT_DIRTY	_PAGE_BIT_SOFTW3 /* software dirty tracking */
-#define _PAGE_BIT_DEVMAP	_PAGE_BIT_SOFTW4
 
 #ifdef CONFIG_X86_64
 #define _PAGE_BIT_SAVED_DIRTY	_PAGE_BIT_SOFTW5 /* Saved Dirty bit (leaf) */
@@ -119,11 +118,9 @@
 
 #if defined(CONFIG_X86_64) || defined(CONFIG_X86_PAE)
 #define _PAGE_NX	(_AT(pteval_t, 1) << _PAGE_BIT_NX)
-#define _PAGE_DEVMAP	(_AT(u64, 1) << _PAGE_BIT_DEVMAP)
 #define _PAGE_SOFTW4	(_AT(pteval_t, 1) << _PAGE_BIT_SOFTW4)
 #else
 #define _PAGE_NX	(_AT(pteval_t, 0))
-#define _PAGE_DEVMAP	(_AT(pteval_t, 0))
 #define _PAGE_SOFTW4	(_AT(pteval_t, 0))
 #endif
 
@@ -152,7 +149,7 @@
 #define _COMMON_PAGE_CHG_MASK	(PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT |	\
				 _PAGE_SPECIAL | _PAGE_ACCESSED |	\
				 _PAGE_DIRTY_BITS | _PAGE_SOFT_DIRTY |	\
-				 _PAGE_DEVMAP | _PAGE_CC | _PAGE_UFFD_WP)
+				 _PAGE_CC | _PAGE_UFFD_WP)
 #define _PAGE_CHG_MASK	(_COMMON_PAGE_CHG_MASK | _PAGE_PAT)
 #define _HPAGE_CHG_MASK	(_COMMON_PAGE_CHG_MASK | _PAGE_PSE | _PAGE_PAT_LARGE)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7b21b48..19950c2 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2803,13 +2803,6 @@ static inline pud_t pud_mkspecial(pud_t pud)
 }
 #endif	/* CONFIG_ARCH_SUPPORTS_PUD_PFNMAP */
 
-#ifndef CONFIG_ARCH_HAS_PTE_DEVMAP
-static inline int pte_devmap(pte_t pte)
-{
-	return 0;
-}
-#endif
-
 extern pte_t *__get_locked_pte(struct mm_struct *mm, unsigned long addr,
			       spinlock_t **ptl);
 static inline pte_t *get_locked_pte(struct mm_struct *mm, unsigned long addr,
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 00e4a06..1c377de 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1606,21 +1606,6 @@ static inline int pud_write(pud_t pud)
 }
#endif /* pud_write */ -#if !defined(CONFIG_ARCH_HAS_PTE_DEVMAP) || !defined(CONFIG_TRANSPARENT_HUGEPAGE) -static inline int pmd_devmap(pmd_t pmd) -{ - return 0; -} -static inline int pud_devmap(pud_t pud) -{ - return 0; -} -static inline int pgd_devmap(pgd_t pgd) -{ - return 0; -} -#endif - #if !defined(CONFIG_TRANSPARENT_HUGEPAGE) || \ !defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) static inline int pud_trans_huge(pud_t pud) @@ -1875,8 +1860,8 @@ typedef unsigned int pgtbl_mod_mask; * - It should contain a huge PFN, which points to a huge page larger than * PAGE_SIZE of the platform. The PFN format isn't important here. * - * - It should cover all kinds of huge mappings (e.g., pXd_trans_huge(), - * pXd_devmap(), or hugetlb mappings). + * - It should cover all kinds of huge mappings (i.e. pXd_trans_huge() + * or hugetlb mappings). */ #ifndef pgd_leaf #define pgd_leaf(x) false diff --git a/mm/Kconfig b/mm/Kconfig index fba9757..9c94180 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -1040,9 +1040,6 @@ config ARCH_HAS_CURRENT_STACK_POINTER register alias named "current_stack_pointer", this config can be selected. -config ARCH_HAS_PTE_DEVMAP - bool - config ARCH_HAS_ZONE_DMA_SET bool @@ -1060,7 +1057,6 @@ config ZONE_DEVICE depends on MEMORY_HOTPLUG depends on MEMORY_HOTREMOVE depends on SPARSEMEM_VMEMMAP - depends on ARCH_HAS_PTE_DEVMAP select XARRAY_MULTI help diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c index bc748f7..cf5ff92 100644 --- a/mm/debug_vm_pgtable.c +++ b/mm/debug_vm_pgtable.c @@ -348,12 +348,6 @@ static void __init pud_advanced_tests(struct pgtable_debug_args *args) vaddr &= HPAGE_PUD_MASK; pud = pfn_pud(args->pud_pfn, args->page_prot); - /* - * Some architectures have debug checks to make sure - * huge pud mapping are only found with devmap entries - * For now test with only devmap entries. 
- */ - pud = pud_mkdevmap(pud); set_pud_at(args->mm, vaddr, args->pudp, pud); flush_dcache_page(page); pudp_set_wrprotect(args->mm, vaddr, args->pudp); @@ -366,7 +360,6 @@ static void __init pud_advanced_tests(struct pgtable_debug_args *args) WARN_ON(!pud_none(pud)); #endif /* __PAGETABLE_PMD_FOLDED */ pud = pfn_pud(args->pud_pfn, args->page_prot); - pud = pud_mkdevmap(pud); pud = pud_wrprotect(pud); pud = pud_mkclean(pud); set_pud_at(args->mm, vaddr, args->pudp, pud); @@ -384,7 +377,6 @@ static void __init pud_advanced_tests(struct pgtable_debug_args *args) #endif /* __PAGETABLE_PMD_FOLDED */ pud = pfn_pud(args->pud_pfn, args->page_prot); - pud = pud_mkdevmap(pud); pud = pud_mkyoung(pud); set_pud_at(args->mm, vaddr, args->pudp, pud); flush_dcache_page(page); @@ -693,53 +685,6 @@ static void __init pmd_protnone_tests(struct pgtable_debug_args *args) static void __init pmd_protnone_tests(struct pgtable_debug_args *args) { } #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ -#ifdef CONFIG_ARCH_HAS_PTE_DEVMAP -static void __init pte_devmap_tests(struct pgtable_debug_args *args) -{ - pte_t pte = pfn_pte(args->fixed_pte_pfn, args->page_prot); - - pr_debug("Validating PTE devmap\n"); - WARN_ON(!pte_devmap(pte_mkdevmap(pte))); -} - -#ifdef CONFIG_TRANSPARENT_HUGEPAGE -static void __init pmd_devmap_tests(struct pgtable_debug_args *args) -{ - pmd_t pmd; - - if (!has_transparent_hugepage()) - return; - - pr_debug("Validating PMD devmap\n"); - pmd = pfn_pmd(args->fixed_pmd_pfn, args->page_prot); - WARN_ON(!pmd_devmap(pmd_mkdevmap(pmd))); -} - -#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD -static void __init pud_devmap_tests(struct pgtable_debug_args *args) -{ - pud_t pud; - - if (!has_transparent_pud_hugepage()) - return; - - pr_debug("Validating PUD devmap\n"); - pud = pfn_pud(args->fixed_pud_pfn, args->page_prot); - WARN_ON(!pud_devmap(pud_mkdevmap(pud))); -} -#else /* !CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */ -static void __init pud_devmap_tests(struct pgtable_debug_args 
*args) { } -#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */ -#else /* CONFIG_TRANSPARENT_HUGEPAGE */ -static void __init pmd_devmap_tests(struct pgtable_debug_args *args) { } -static void __init pud_devmap_tests(struct pgtable_debug_args *args) { } -#endif /* CONFIG_TRANSPARENT_HUGEPAGE */ -#else -static void __init pte_devmap_tests(struct pgtable_debug_args *args) { } -static void __init pmd_devmap_tests(struct pgtable_debug_args *args) { } -static void __init pud_devmap_tests(struct pgtable_debug_args *args) { } -#endif /* CONFIG_ARCH_HAS_PTE_DEVMAP */ - static void __init pte_soft_dirty_tests(struct pgtable_debug_args *args) { pte_t pte = pfn_pte(args->fixed_pte_pfn, args->page_prot); @@ -1341,10 +1286,6 @@ static int __init debug_vm_pgtable(void) pte_protnone_tests(&args); pmd_protnone_tests(&args); - pte_devmap_tests(&args); - pmd_devmap_tests(&args); - pud_devmap_tests(&args); - pte_soft_dirty_tests(&args); pmd_soft_dirty_tests(&args); pte_swap_soft_dirty_tests(&args); diff --git a/mm/hmm.c b/mm/hmm.c index 5037f98..1fbbeea 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -393,8 +393,7 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp, return 0; } -#if defined(CONFIG_ARCH_HAS_PTE_DEVMAP) && \ - defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) +#if defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) static inline unsigned long pud_to_hmm_pfn_flags(struct hmm_range *range, pud_t pud) { diff --git a/mm/madvise.c b/mm/madvise.c index e01e93e..1947d8a 100644 --- a/mm/madvise.c +++ b/mm/madvise.c @@ -1066,7 +1066,7 @@ static int guard_install_pud_entry(pud_t *pud, unsigned long addr, pud_t pudval = pudp_get(pud); /* If huge return >0 so we abort the operation + zap. 
*/ - return pud_trans_huge(pudval) || pud_devmap(pudval); + return pud_trans_huge(pudval); } static int guard_install_pmd_entry(pmd_t *pmd, unsigned long addr, @@ -1075,7 +1075,7 @@ static int guard_install_pmd_entry(pmd_t *pmd, unsigned long addr, pmd_t pmdval = pmdp_get(pmd); /* If huge return >0 so we abort the operation + zap. */ - return pmd_trans_huge(pmdval) || pmd_devmap(pmdval); + return pmd_trans_huge(pmdval); } static int guard_install_pte_entry(pte_t *pte, unsigned long addr, @@ -1186,7 +1186,7 @@ static int guard_remove_pud_entry(pud_t *pud, unsigned long addr, pud_t pudval = pudp_get(pud); /* If huge, cannot have guard pages present, so no-op - skip. */ - if (pud_trans_huge(pudval) || pud_devmap(pudval)) + if (pud_trans_huge(pudval)) walk->action = ACTION_CONTINUE; return 0; @@ -1198,7 +1198,7 @@ static int guard_remove_pmd_entry(pmd_t *pmd, unsigned long addr, pmd_t pmdval = pmdp_get(pmd); /* If huge, cannot have guard pages present, so no-op - skip. */ - if (pmd_trans_huge(pmdval) || pmd_devmap(pmdval)) + if (pmd_trans_huge(pmdval)) walk->action = ACTION_CONTINUE; return 0;
From patchwork Wed Feb 19 05:04:55 2025
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 13981500
From: Alistair Popple
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: Alistair Popple, gerald.schaefer@linux.ibm.com, dan.j.williams@intel.com, jgg@ziepe.ca, willy@infradead.org, david@redhat.com, linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev, linux-fsdevel@vger.kernel.org,
linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org, jhubbard@nvidia.com, hch@lst.de, zhang.lyra@gmail.com, debug@rivosinc.com, bjorn@kernel.org, balbirs@nvidia.com
Subject: [PATCH RFC v2 11/12] mm: Remove callers of pfn_t functionality
Date: Wed, 19 Feb 2025 16:04:55 +1100
Message-ID: <138bd6e485b5e37280762820fc51e7a23bf2a3dd.1739941374.git-series.apopple@nvidia.com>
X-Mailer: git-send-email 2.45.2
Precedence: bulk
X-Mailing-List: linux-fsdevel@vger.kernel.org
MIME-Version: 1.0
All PFN_* pfn_t flags have been removed. Therefore there is no longer a need for the pfn_t type and all uses can be replaced with normal pfns.

Signed-off-by: Alistair Popple
Reviewed-by: Christoph Hellwig
---
 arch/x86/mm/pat/memtype.c                |  6 +-
 drivers/dax/device.c                     | 23 +++----
 drivers/dax/hmem/hmem.c                  |  1 +-
 drivers/dax/kmem.c                       |  1 +-
 drivers/dax/pmem.c                       |  1 +-
 drivers/dax/super.c                      |  3 +-
 drivers/gpu/drm/exynos/exynos_drm_gem.c  |  1 +-
 drivers/gpu/drm/gma500/fbdev.c           |  3 +-
 drivers/gpu/drm/i915/gem/i915_gem_mman.c |  1 +-
 drivers/gpu/drm/msm/msm_gem.c            |  1 +-
 drivers/gpu/drm/omapdrm/omap_gem.c       |  6 +--
 drivers/gpu/drm/v3d/v3d_bo.c             |  1 +-
 drivers/md/dm-linear.c                   |  2 +-
 drivers/md/dm-log-writes.c               |  2 +-
 drivers/md/dm-stripe.c                   |  2 +-
 drivers/md/dm-target.c                   |  2 +-
 drivers/md/dm-writecache.c               |  9 +--
 drivers/md/dm.c                          |  2 +-
 drivers/nvdimm/pmem.c                    |  8 +--
 drivers/nvdimm/pmem.h                    |  4 +-
 drivers/s390/block/dcssblk.c             |  9 +--
 drivers/vfio/pci/vfio_pci_core.c         |  5 +-
 fs/cramfs/inode.c                        |  5 +-
 fs/dax.c                                 | 50 +++++++--------
 fs/ext4/file.c                           |  2 +-
 fs/fuse/dax.c                            |  3 +-
 fs/fuse/virtio_fs.c                      |  5 +-
 fs/xfs/xfs_file.c                        |  2 +-
 include/linux/dax.h                      |  9 +--
 include/linux/device-mapper.h            |  2 +-
 include/linux/huge_mm.h                  |  6 +-
 include/linux/mm.h                       |  4 +-
 include/linux/pfn.h                      |  9 +---
 include/linux/pfn_t.h                    | 85 +-------------------------
 include/linux/pgtable.h                  |  4 +-
 include/trace/events/fs_dax.h            | 12 +---
 mm/debug_vm_pgtable.c                    |  1 +-
 mm/huge_memory.c                         | 27 +++-----
 mm/memory.c                              | 31 ++++-----
 mm/memremap.c                            |  1 +-
 mm/migrate.c                             |  1 +-
 tools/testing/nvdimm/pmem-dax.c          |  6 +-
 tools/testing/nvdimm/test/iomap.c        |  7 +--
 43 files changed, 119 insertions(+), 246 deletions(-)
 delete mode 100644 include/linux/pfn_t.h

diff --git
a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c index feb8cc6..508f807 100644 --- a/arch/x86/mm/pat/memtype.c +++ b/arch/x86/mm/pat/memtype.c @@ -36,7 +36,6 @@ #include #include #include -#include #include #include #include @@ -1053,7 +1052,8 @@ int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot, return 0; } -void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot, pfn_t pfn) +void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot, + unsigned long pfn) { enum page_cache_mode pcm; @@ -1061,7 +1061,7 @@ void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot, pfn_t pfn) return; /* Set prot based on lookup */ - pcm = lookup_memtype(pfn_t_to_phys(pfn)); + pcm = lookup_memtype(PFN_PHYS(pfn)); *prot = __pgprot((pgprot_val(*prot) & (~_PAGE_CACHE_MASK)) | cachemode2protval(pcm)); } diff --git a/drivers/dax/device.c b/drivers/dax/device.c index 328231c..2bb40a6 100644 --- a/drivers/dax/device.c +++ b/drivers/dax/device.c @@ -4,7 +4,6 @@ #include #include #include -#include #include #include #include @@ -73,7 +72,7 @@ __weak phys_addr_t dax_pgoff_to_phys(struct dev_dax *dev_dax, pgoff_t pgoff, return -1; } -static void dax_set_mapping(struct vm_fault *vmf, pfn_t pfn, +static void dax_set_mapping(struct vm_fault *vmf, unsigned long pfn, unsigned long fault_size) { unsigned long i, nr_pages = fault_size / PAGE_SIZE; @@ -89,7 +88,7 @@ static void dax_set_mapping(struct vm_fault *vmf, pfn_t pfn, ALIGN_DOWN(vmf->address, fault_size)); for (i = 0; i < nr_pages; i++) { - struct folio *folio = pfn_folio(pfn_t_to_pfn(pfn) + i); + struct folio *folio = pfn_folio(pfn + i); if (folio->mapping) continue; @@ -104,7 +103,7 @@ static vm_fault_t __dev_dax_pte_fault(struct dev_dax *dev_dax, { struct device *dev = &dev_dax->dev; phys_addr_t phys; - pfn_t pfn; + unsigned long pfn; unsigned int fault_size = PAGE_SIZE; if (check_vma(dev_dax, vmf->vma, __func__)) @@ -125,11 +124,11 @@ static vm_fault_t __dev_dax_pte_fault(struct dev_dax *dev_dax, 
return VM_FAULT_SIGBUS; } - pfn = phys_to_pfn_t(phys, 0); + pfn = PHYS_PFN(phys); dax_set_mapping(vmf, pfn, fault_size); - return vmf_insert_page_mkwrite(vmf, pfn_t_to_page(pfn), + return vmf_insert_page_mkwrite(vmf, pfn_to_page(pfn), vmf->flags & FAULT_FLAG_WRITE); } @@ -140,7 +139,7 @@ static vm_fault_t __dev_dax_pmd_fault(struct dev_dax *dev_dax, struct device *dev = &dev_dax->dev; phys_addr_t phys; pgoff_t pgoff; - pfn_t pfn; + unsigned long pfn; unsigned int fault_size = PMD_SIZE; if (check_vma(dev_dax, vmf->vma, __func__)) @@ -169,11 +168,11 @@ static vm_fault_t __dev_dax_pmd_fault(struct dev_dax *dev_dax, return VM_FAULT_SIGBUS; } - pfn = phys_to_pfn_t(phys, 0); + pfn = PHYS_PFN(phys); dax_set_mapping(vmf, pfn, fault_size); - return vmf_insert_folio_pmd(vmf, page_folio(pfn_t_to_page(pfn)), + return vmf_insert_folio_pmd(vmf, page_folio(pfn_to_page(pfn)), vmf->flags & FAULT_FLAG_WRITE); } @@ -185,7 +184,7 @@ static vm_fault_t __dev_dax_pud_fault(struct dev_dax *dev_dax, struct device *dev = &dev_dax->dev; phys_addr_t phys; pgoff_t pgoff; - pfn_t pfn; + unsigned long pfn; unsigned int fault_size = PUD_SIZE; @@ -215,11 +214,11 @@ static vm_fault_t __dev_dax_pud_fault(struct dev_dax *dev_dax, return VM_FAULT_SIGBUS; } - pfn = phys_to_pfn_t(phys, 0); + pfn = PHYS_PFN(phys); dax_set_mapping(vmf, pfn, fault_size); - return vmf_insert_folio_pud(vmf, page_folio(pfn_t_to_page(pfn)), + return vmf_insert_folio_pud(vmf, page_folio(pfn_to_page(pfn)), vmf->flags & FAULT_FLAG_WRITE); } #else diff --git a/drivers/dax/hmem/hmem.c b/drivers/dax/hmem/hmem.c index 5e7c53f..c18451a 100644 --- a/drivers/dax/hmem/hmem.c +++ b/drivers/dax/hmem/hmem.c @@ -2,7 +2,6 @@ #include #include #include -#include #include #include "../bus.h" diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c index e97d47f..87b5321 100644 --- a/drivers/dax/kmem.c +++ b/drivers/dax/kmem.c @@ -5,7 +5,6 @@ #include #include #include -#include #include #include #include diff --git a/drivers/dax/pmem.c 
b/drivers/dax/pmem.c index c8ebf4e..bee9306 100644 --- a/drivers/dax/pmem.c +++ b/drivers/dax/pmem.c @@ -2,7 +2,6 @@ /* Copyright(c) 2016 - 2018 Intel Corporation. All rights reserved. */ #include #include -#include #include "../nvdimm/pfn.h" #include "../nvdimm/nd.h" #include "bus.h" diff --git a/drivers/dax/super.c b/drivers/dax/super.c index e16d1d4..54c480e 100644 --- a/drivers/dax/super.c +++ b/drivers/dax/super.c @@ -7,7 +7,6 @@ #include #include #include -#include #include #include #include @@ -148,7 +147,7 @@ enum dax_device_flags { * pages accessible at the device relative @pgoff. */ long dax_direct_access(struct dax_device *dax_dev, pgoff_t pgoff, long nr_pages, - enum dax_access_mode mode, void **kaddr, pfn_t *pfn) + enum dax_access_mode mode, void **kaddr, unsigned long *pfn) { long avail; diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c index 4787fee..84b2172 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_gem.c +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c @@ -7,7 +7,6 @@ #include -#include #include #include diff --git a/drivers/gpu/drm/gma500/fbdev.c b/drivers/gpu/drm/gma500/fbdev.c index 109efdc..68b825f 100644 --- a/drivers/gpu/drm/gma500/fbdev.c +++ b/drivers/gpu/drm/gma500/fbdev.c @@ -6,7 +6,6 @@ **************************************************************************/ #include -#include #include #include @@ -33,7 +32,7 @@ static vm_fault_t psb_fbdev_vm_fault(struct vm_fault *vmf) vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); for (i = 0; i < page_num; ++i) { - err = vmf_insert_mixed(vma, address, __pfn_to_pfn_t(pfn, 0)); + err = vmf_insert_mixed(vma, address, pfn); if (unlikely(err & VM_FAULT_ERROR)) break; address += PAGE_SIZE; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c index 21274aa..d6ac557 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c @@ -6,7 +6,6 @@ #include #include -#include 
#include #include diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index ebc9ba6..1c27500 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -9,7 +9,6 @@ #include #include #include -#include #include #include diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c index 9df05b2..381552b 100644 --- a/drivers/gpu/drm/omapdrm/omap_gem.c +++ b/drivers/gpu/drm/omapdrm/omap_gem.c @@ -8,7 +8,6 @@ #include #include #include -#include #include #include @@ -371,7 +370,7 @@ static vm_fault_t omap_gem_fault_1d(struct drm_gem_object *obj, VERB("Inserting %p pfn %lx, pa %lx", (void *)vmf->address, pfn, pfn << PAGE_SHIFT); - return vmf_insert_mixed(vma, vmf->address, __pfn_to_pfn_t(pfn, 0)); + return vmf_insert_mixed(vma, vmf->address, pfn); } /* Special handling for the case of faulting in 2d tiled buffers */ @@ -466,8 +465,7 @@ static vm_fault_t omap_gem_fault_2d(struct drm_gem_object *obj, pfn, pfn << PAGE_SHIFT); for (i = n; i > 0; i--) { - ret = vmf_insert_mixed(vma, - vaddr, __pfn_to_pfn_t(pfn, 0)); + ret = vmf_insert_mixed(vma, vaddr, pfn); if (ret & VM_FAULT_ERROR) break; pfn += priv->usergart[fmt].stride_pfn; diff --git a/drivers/gpu/drm/v3d/v3d_bo.c b/drivers/gpu/drm/v3d/v3d_bo.c index bb78155..c41476d 100644 --- a/drivers/gpu/drm/v3d/v3d_bo.c +++ b/drivers/gpu/drm/v3d/v3d_bo.c @@ -16,7 +16,6 @@ */ #include -#include #include #include "v3d_drv.h" diff --git a/drivers/md/dm-linear.c b/drivers/md/dm-linear.c index 66318ab..bc2f163 100644 --- a/drivers/md/dm-linear.c +++ b/drivers/md/dm-linear.c @@ -168,7 +168,7 @@ static struct dax_device *linear_dax_pgoff(struct dm_target *ti, pgoff_t *pgoff) static long linear_dax_direct_access(struct dm_target *ti, pgoff_t pgoff, long nr_pages, enum dax_access_mode mode, void **kaddr, - pfn_t *pfn) + unsigned long *pfn) { struct dax_device *dax_dev = linear_dax_pgoff(ti, &pgoff); diff --git a/drivers/md/dm-log-writes.c b/drivers/md/dm-log-writes.c 
index 8d7df83..4c6aed7 100644 --- a/drivers/md/dm-log-writes.c +++ b/drivers/md/dm-log-writes.c @@ -891,7 +891,7 @@ static struct dax_device *log_writes_dax_pgoff(struct dm_target *ti, static long log_writes_dax_direct_access(struct dm_target *ti, pgoff_t pgoff, long nr_pages, enum dax_access_mode mode, void **kaddr, - pfn_t *pfn) + unsigned long *pfn) { struct dax_device *dax_dev = log_writes_dax_pgoff(ti, &pgoff); diff --git a/drivers/md/dm-stripe.c b/drivers/md/dm-stripe.c index 3786ac6..d7e93c8 100644 --- a/drivers/md/dm-stripe.c +++ b/drivers/md/dm-stripe.c @@ -316,7 +316,7 @@ static struct dax_device *stripe_dax_pgoff(struct dm_target *ti, pgoff_t *pgoff) static long stripe_dax_direct_access(struct dm_target *ti, pgoff_t pgoff, long nr_pages, enum dax_access_mode mode, void **kaddr, - pfn_t *pfn) + unsigned long *pfn) { struct dax_device *dax_dev = stripe_dax_pgoff(ti, &pgoff); diff --git a/drivers/md/dm-target.c b/drivers/md/dm-target.c index 652627a..2af5a95 100644 --- a/drivers/md/dm-target.c +++ b/drivers/md/dm-target.c @@ -255,7 +255,7 @@ static void io_err_io_hints(struct dm_target *ti, struct queue_limits *limits) static long io_err_dax_direct_access(struct dm_target *ti, pgoff_t pgoff, long nr_pages, enum dax_access_mode mode, void **kaddr, - pfn_t *pfn) + unsigned long *pfn) { return -EIO; } diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c index 7ce8847..27d240d 100644 --- a/drivers/md/dm-writecache.c +++ b/drivers/md/dm-writecache.c @@ -13,7 +13,6 @@ #include #include #include -#include #include #include #include "dm-io-tracker.h" @@ -256,7 +255,7 @@ static int persistent_memory_claim(struct dm_writecache *wc) int r; loff_t s; long p, da; - pfn_t pfn; + unsigned long pfn; int id; struct page **pages; sector_t offset; @@ -290,7 +289,7 @@ static int persistent_memory_claim(struct dm_writecache *wc) r = da; goto err2; } - if (!pfn_t_has_page(pfn)) { + if (!pfn_valid(pfn)) { wc->memory_map = NULL; r = -EOPNOTSUPP; goto err2; @@ 
-314,12 +313,12 @@ static int persistent_memory_claim(struct dm_writecache *wc) r = daa ? daa : -EINVAL; goto err3; } - if (!pfn_t_has_page(pfn)) { + if (!pfn_valid(pfn)) { r = -EOPNOTSUPP; goto err3; } while (daa-- && i < p) { - pages[i++] = pfn_t_to_page(pfn); + pages[i++] = pfn_to_page(pfn); pfn.val++; if (!(i & 15)) cond_resched(); diff --git a/drivers/md/dm.c b/drivers/md/dm.c index 4d1e428..1dfc97b 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -1232,7 +1232,7 @@ static struct dm_target *dm_dax_get_live_target(struct mapped_device *md, static long dm_dax_direct_access(struct dax_device *dax_dev, pgoff_t pgoff, long nr_pages, enum dax_access_mode mode, void **kaddr, - pfn_t *pfn) + unsigned long *pfn) { struct mapped_device *md = dax_get_private(dax_dev); sector_t sector = pgoff * PAGE_SECTORS; diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c index 785b2d2..ae4f5a4 100644 --- a/drivers/nvdimm/pmem.c +++ b/drivers/nvdimm/pmem.c @@ -20,7 +20,6 @@ #include #include #include -#include #include #include #include @@ -242,7 +241,7 @@ static void pmem_submit_bio(struct bio *bio) /* see "strong" declaration in tools/testing/nvdimm/pmem-dax.c */ __weak long __pmem_direct_access(struct pmem_device *pmem, pgoff_t pgoff, long nr_pages, enum dax_access_mode mode, void **kaddr, - pfn_t *pfn) + unsigned long *pfn) { resource_size_t offset = PFN_PHYS(pgoff) + pmem->data_offset; sector_t sector = PFN_PHYS(pgoff) >> SECTOR_SHIFT; @@ -254,7 +253,7 @@ __weak long __pmem_direct_access(struct pmem_device *pmem, pgoff_t pgoff, if (kaddr) *kaddr = pmem->virt_addr + offset; if (pfn) - *pfn = phys_to_pfn_t(pmem->phys_addr + offset, pmem->pfn_flags); + *pfn = PHYS_PFN(pmem->phys_addr + offset); if (bb->count && badblocks_check(bb, sector, num, &first_bad, &num_bad)) { @@ -303,7 +302,7 @@ static int pmem_dax_zero_page_range(struct dax_device *dax_dev, pgoff_t pgoff, static long pmem_dax_direct_access(struct dax_device *dax_dev, pgoff_t pgoff, long nr_pages, enum 
dax_access_mode mode, - void **kaddr, pfn_t *pfn) + void **kaddr, unsigned long *pfn) { struct pmem_device *pmem = dax_get_private(dax_dev); @@ -513,7 +512,6 @@ static int pmem_attach_disk(struct device *dev, pmem->disk = disk; pmem->pgmap.owner = pmem; - pmem->pfn_flags = 0; if (is_nd_pfn(dev)) { pmem->pgmap.type = MEMORY_DEVICE_FS_DAX; pmem->pgmap.ops = &fsdax_pagemap_ops; diff --git a/drivers/nvdimm/pmem.h b/drivers/nvdimm/pmem.h index 392b0b3..a48509f 100644 --- a/drivers/nvdimm/pmem.h +++ b/drivers/nvdimm/pmem.h @@ -5,7 +5,6 @@ #include #include #include -#include #include enum dax_access_mode; @@ -16,7 +15,6 @@ struct pmem_device { phys_addr_t phys_addr; /* when non-zero this device is hosting a 'pfn' instance */ phys_addr_t data_offset; - u64 pfn_flags; void *virt_addr; /* immutable base size of the namespace */ size_t size; @@ -31,7 +29,7 @@ struct pmem_device { long __pmem_direct_access(struct pmem_device *pmem, pgoff_t pgoff, long nr_pages, enum dax_access_mode mode, void **kaddr, - pfn_t *pfn); + unsigned long *pfn); #ifdef CONFIG_MEMORY_FAILURE static inline bool test_and_clear_pmem_poison(struct page *page) diff --git a/drivers/s390/block/dcssblk.c b/drivers/s390/block/dcssblk.c index 02d7a21..1dee7e8 100644 --- a/drivers/s390/block/dcssblk.c +++ b/drivers/s390/block/dcssblk.c @@ -17,7 +17,6 @@ #include #include #include -#include #include #include #include @@ -33,7 +32,7 @@ static void dcssblk_release(struct gendisk *disk); static void dcssblk_submit_bio(struct bio *bio); static long dcssblk_dax_direct_access(struct dax_device *dax_dev, pgoff_t pgoff, long nr_pages, enum dax_access_mode mode, void **kaddr, - pfn_t *pfn); + unsigned long *pfn); static char dcssblk_segments[DCSSBLK_PARM_LEN] = "\0"; @@ -914,7 +913,7 @@ dcssblk_submit_bio(struct bio *bio) static long __dcssblk_direct_access(struct dcssblk_dev_info *dev_info, pgoff_t pgoff, - long nr_pages, void **kaddr, pfn_t *pfn) + long nr_pages, void **kaddr, unsigned long *pfn) { resource_size_t 
offset = pgoff * PAGE_SIZE; unsigned long dev_sz; @@ -923,7 +922,7 @@ __dcssblk_direct_access(struct dcssblk_dev_info *dev_info, pgoff_t pgoff, if (kaddr) *kaddr = __va(dev_info->start + offset); if (pfn) - *pfn = __pfn_to_pfn_t(PFN_DOWN(dev_info->start + offset), 0); + *pfn = PFN_DOWN(dev_info->start + offset); return (dev_sz - offset) / PAGE_SIZE; } @@ -931,7 +930,7 @@ __dcssblk_direct_access(struct dcssblk_dev_info *dev_info, pgoff_t pgoff, static long dcssblk_dax_direct_access(struct dax_device *dax_dev, pgoff_t pgoff, long nr_pages, enum dax_access_mode mode, void **kaddr, - pfn_t *pfn) + unsigned long *pfn) { struct dcssblk_dev_info *dev_info = dax_get_private(dax_dev); diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c index 383e034..d3b1966 100644 --- a/drivers/vfio/pci/vfio_pci_core.c +++ b/drivers/vfio/pci/vfio_pci_core.c @@ -20,7 +20,6 @@ #include #include #include -#include #include #include #include @@ -1677,12 +1676,12 @@ static vm_fault_t vfio_pci_mmap_huge_fault(struct vm_fault *vmf, break; #ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP case PMD_ORDER: - ret = vmf_insert_pfn_pmd(vmf, __pfn_to_pfn_t(pfn, 0), false); + ret = vmf_insert_pfn_pmd(vmf, pfn, false); break; #endif #ifdef CONFIG_ARCH_SUPPORTS_PUD_PFNMAP case PUD_ORDER: - ret = vmf_insert_pfn_pud(vmf, __pfn_to_pfn_t(pfn, 0), false); + ret = vmf_insert_pfn_pud(vmf, pfn, false); break; #endif default: diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c index 820a664..b002e9b 100644 --- a/fs/cramfs/inode.c +++ b/fs/cramfs/inode.c @@ -17,7 +17,6 @@ #include #include #include -#include #include #include #include @@ -412,8 +411,8 @@ static int cramfs_physmem_mmap(struct file *file, struct vm_area_struct *vma) for (i = 0; i < pages && !ret; i++) { vm_fault_t vmf; unsigned long off = i * PAGE_SIZE; - pfn_t pfn = phys_to_pfn_t(address + off, 0); - vmf = vmf_insert_mixed(vma, vma->vm_start + off, pfn); + vmf = vmf_insert_mixed(vma, vma->vm_start + off, + PHYS_PFN(address + off)); if (vmf & 
VM_FAULT_ERROR) ret = vm_fault_to_errno(vmf, 0); } diff --git a/fs/dax.c b/fs/dax.c index e26fb6b..6411088 100644 --- a/fs/dax.c +++ b/fs/dax.c @@ -20,7 +20,6 @@ #include #include #include -#include #include #include #include @@ -76,9 +75,9 @@ static struct folio *dax_to_folio(void *entry) return page_folio(pfn_to_page(dax_to_pfn(entry))); } -static void *dax_make_entry(pfn_t pfn, unsigned long flags) +static void *dax_make_entry(unsigned long pfn, unsigned long flags) { - return xa_mk_value(flags | (pfn_t_to_pfn(pfn) << DAX_SHIFT)); + return xa_mk_value(flags | (pfn << DAX_SHIFT)); } static bool dax_is_locked(void *entry) @@ -712,7 +711,7 @@ static void *grab_mapping_entry(struct xa_state *xas, if (order > 0) flags |= DAX_PMD; - entry = dax_make_entry(pfn_to_pfn_t(0), flags); + entry = dax_make_entry(0, flags); dax_lock_entry(xas, entry); if (xas_error(xas)) goto out_unlock; @@ -1046,7 +1045,7 @@ static bool dax_fault_is_synchronous(const struct iomap_iter *iter, * appropriate. */ static void *dax_insert_entry(struct xa_state *xas, struct vm_fault *vmf, - const struct iomap_iter *iter, void *entry, pfn_t pfn, + const struct iomap_iter *iter, void *entry, unsigned long pfn, unsigned long flags) { struct address_space *mapping = vmf->vma->vm_file->f_mapping; @@ -1245,7 +1244,7 @@ int dax_writeback_mapping_range(struct address_space *mapping, EXPORT_SYMBOL_GPL(dax_writeback_mapping_range); static int dax_iomap_direct_access(const struct iomap *iomap, loff_t pos, - size_t size, void **kaddr, pfn_t *pfnp) + size_t size, void **kaddr, unsigned long *pfnp) { pgoff_t pgoff = dax_iomap_pgoff(iomap, pos); int id, rc = 0; @@ -1263,7 +1262,7 @@ static int dax_iomap_direct_access(const struct iomap *iomap, loff_t pos, rc = -EINVAL; if (PFN_PHYS(length) < size) goto out; - if (pfn_t_to_pfn(*pfnp) & (PHYS_PFN(size)-1)) + if (*pfnp & (PHYS_PFN(size)-1)) goto out; rc = 0; @@ -1367,12 +1366,12 @@ static vm_fault_t dax_load_hole(struct xa_state *xas, struct vm_fault *vmf, { struct 
inode *inode = iter->inode; unsigned long vaddr = vmf->address; - pfn_t pfn = pfn_to_pfn_t(my_zero_pfn(vaddr)); + unsigned long pfn = my_zero_pfn(vaddr); vm_fault_t ret; *entry = dax_insert_entry(xas, vmf, iter, *entry, pfn, DAX_ZERO_PAGE); - ret = vmf_insert_page_mkwrite(vmf, pfn_t_to_page(pfn), false); + ret = vmf_insert_page_mkwrite(vmf, pfn_to_page(pfn), false); trace_dax_load_hole(inode, vmf, ret); return ret; } @@ -1389,14 +1388,14 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf, struct folio *zero_folio; spinlock_t *ptl; pmd_t pmd_entry; - pfn_t pfn; + unsigned long pfn; zero_folio = mm_get_huge_zero_folio(vmf->vma->vm_mm); if (unlikely(!zero_folio)) goto fallback; - pfn = page_to_pfn_t(&zero_folio->page); + pfn = page_to_pfn(&zero_folio->page); *entry = dax_insert_entry(xas, vmf, iter, *entry, pfn, DAX_PMD | DAX_ZERO_PAGE); @@ -1786,7 +1785,8 @@ static vm_fault_t dax_fault_return(int error) * insertion for now and return the pfn so that caller can insert it after the * fsync is done. */ -static vm_fault_t dax_fault_synchronous_pfnp(pfn_t *pfnp, pfn_t pfn) +static vm_fault_t dax_fault_synchronous_pfnp(unsigned long *pfnp, + unsigned long pfn) { if (WARN_ON_ONCE(!pfnp)) return VM_FAULT_SIGBUS; @@ -1834,7 +1834,7 @@ static vm_fault_t dax_fault_cow_page(struct vm_fault *vmf, * @pmd: distinguish whether it is a pmd fault */ static vm_fault_t dax_fault_iter(struct vm_fault *vmf, - const struct iomap_iter *iter, pfn_t *pfnp, + const struct iomap_iter *iter, unsigned long *pfnp, struct xa_state *xas, void **entry, bool pmd) { const struct iomap *iomap = &iter->iomap; @@ -1845,7 +1845,7 @@ static vm_fault_t dax_fault_iter(struct vm_fault *vmf, unsigned long entry_flags = pmd ? 
DAX_PMD : 0; struct folio *folio; int ret, err = 0; - pfn_t pfn; + unsigned long pfn; void *kaddr; if (!pmd && vmf->cow_page) @@ -1882,16 +1882,15 @@ static vm_fault_t dax_fault_iter(struct vm_fault *vmf, folio_ref_inc(folio); if (pmd) - ret = vmf_insert_folio_pmd(vmf, pfn_folio(pfn_t_to_pfn(pfn)), - write); + ret = vmf_insert_folio_pmd(vmf, pfn_folio(pfn), write); else - ret = vmf_insert_page_mkwrite(vmf, pfn_t_to_page(pfn), write); + ret = vmf_insert_page_mkwrite(vmf, pfn_to_page(pfn), write); folio_put(folio); return ret; } -static vm_fault_t dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp, +static vm_fault_t dax_iomap_pte_fault(struct vm_fault *vmf, unsigned long *pfnp, int *iomap_errp, const struct iomap_ops *ops) { struct address_space *mapping = vmf->vma->vm_file->f_mapping; @@ -2001,7 +2000,7 @@ static bool dax_fault_check_fallback(struct vm_fault *vmf, struct xa_state *xas, return false; } -static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp, +static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, unsigned long *pfnp, const struct iomap_ops *ops) { struct address_space *mapping = vmf->vma->vm_file->f_mapping; @@ -2080,7 +2079,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp, return ret; } #else -static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp, +static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, unsigned long *pfnp, const struct iomap_ops *ops) { return VM_FAULT_FALLBACK; @@ -2101,7 +2100,8 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp, * successfully. 
*/ vm_fault_t dax_iomap_fault(struct vm_fault *vmf, unsigned int order, - pfn_t *pfnp, int *iomap_errp, const struct iomap_ops *ops) + unsigned long *pfnp, int *iomap_errp, + const struct iomap_ops *ops) { if (order == 0) return dax_iomap_pte_fault(vmf, pfnp, iomap_errp, ops); @@ -2121,8 +2121,8 @@ EXPORT_SYMBOL_GPL(dax_iomap_fault); * This function inserts a writeable PTE or PMD entry into the page tables * for an mmaped DAX file. It also marks the page cache entry as dirty. */ -static vm_fault_t -dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn, unsigned int order) +static vm_fault_t dax_insert_pfn_mkwrite(struct vm_fault *vmf, + unsigned long pfn, unsigned int order) { struct address_space *mapping = vmf->vma->vm_file->f_mapping; XA_STATE_ORDER(xas, &mapping->i_pages, vmf->pgoff, order); @@ -2144,7 +2144,7 @@ dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn, unsigned int order) xas_set_mark(&xas, PAGECACHE_TAG_DIRTY); dax_lock_entry(&xas, entry); xas_unlock_irq(&xas); - folio = pfn_folio(pfn_t_to_pfn(pfn)); + folio = pfn_folio(pfn); folio_ref_inc(folio); if (order == 0) ret = vmf_insert_page_mkwrite(vmf, &folio->page, true); @@ -2171,7 +2171,7 @@ dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn, unsigned int order) * table entry. 
*/ vm_fault_t dax_finish_sync_fault(struct vm_fault *vmf, unsigned int order, - pfn_t pfn) + unsigned long pfn) { int err; loff_t start = ((loff_t)vmf->pgoff) << PAGE_SHIFT; diff --git a/fs/ext4/file.c b/fs/ext4/file.c index a520514..608dcbb 100644 --- a/fs/ext4/file.c +++ b/fs/ext4/file.c @@ -741,7 +741,7 @@ static vm_fault_t ext4_dax_huge_fault(struct vm_fault *vmf, unsigned int order) bool write = (vmf->flags & FAULT_FLAG_WRITE) && (vmf->vma->vm_flags & VM_SHARED); struct address_space *mapping = vmf->vma->vm_file->f_mapping; - pfn_t pfn; + unsigned long pfn; if (write) { sb_start_pagefault(sb); diff --git a/fs/fuse/dax.c b/fs/fuse/dax.c index 0502bf3..ac6d4c1 100644 --- a/fs/fuse/dax.c +++ b/fs/fuse/dax.c @@ -10,7 +10,6 @@ #include #include #include -#include #include #include @@ -757,7 +756,7 @@ static vm_fault_t __fuse_dax_fault(struct vm_fault *vmf, unsigned int order, vm_fault_t ret; struct inode *inode = file_inode(vmf->vma->vm_file); struct super_block *sb = inode->i_sb; - pfn_t pfn; + unsigned long pfn; int error = 0; struct fuse_conn *fc = get_fuse_conn(inode); struct fuse_conn_dax *fcd = fc->dax; diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c index 2c7b24c..d0b6612 100644 --- a/fs/fuse/virtio_fs.c +++ b/fs/fuse/virtio_fs.c @@ -9,7 +9,6 @@ #include #include #include -#include #include #include #include @@ -1008,7 +1007,7 @@ static void virtio_fs_cleanup_vqs(struct virtio_device *vdev) */ static long virtio_fs_direct_access(struct dax_device *dax_dev, pgoff_t pgoff, long nr_pages, enum dax_access_mode mode, - void **kaddr, pfn_t *pfn) + void **kaddr, unsigned long *pfn) { struct virtio_fs *fs = dax_get_private(dax_dev); phys_addr_t offset = PFN_PHYS(pgoff); @@ -1017,7 +1016,7 @@ static long virtio_fs_direct_access(struct dax_device *dax_dev, pgoff_t pgoff, if (kaddr) *kaddr = fs->window_kaddr + offset; if (pfn) - *pfn = phys_to_pfn_t(fs->window_phys_addr + offset, 0); + *pfn = PHYS_PFN(fs->window_phys_addr + offset); return nr_pages > max_nr_pages ? 
max_nr_pages : nr_pages; } diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c index f7a7d89..e80b817 100644 --- a/fs/xfs/xfs_file.c +++ b/fs/xfs/xfs_file.c @@ -1426,7 +1426,7 @@ xfs_dax_fault_locked( bool write_fault) { vm_fault_t ret; - pfn_t pfn; + unsigned long pfn; if (!IS_ENABLED(CONFIG_FS_DAX)) { ASSERT(0); diff --git a/include/linux/dax.h b/include/linux/dax.h index dcc9fcd..29eec75 100644 --- a/include/linux/dax.h +++ b/include/linux/dax.h @@ -26,7 +26,7 @@ struct dax_operations { * number of pages available for DAX at that pfn. */ long (*direct_access)(struct dax_device *, pgoff_t, long, - enum dax_access_mode, void **, pfn_t *); + enum dax_access_mode, void **, unsigned long *); /* zero_page_range: required operation. Zero page range */ int (*zero_page_range)(struct dax_device *, pgoff_t, size_t); /* @@ -241,7 +241,7 @@ static inline void dax_break_layout_final(struct inode *inode) bool dax_alive(struct dax_device *dax_dev); void *dax_get_private(struct dax_device *dax_dev); long dax_direct_access(struct dax_device *dax_dev, pgoff_t pgoff, long nr_pages, - enum dax_access_mode mode, void **kaddr, pfn_t *pfn); + enum dax_access_mode mode, void **kaddr, unsigned long *pfn); size_t dax_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff, void *addr, size_t bytes, struct iov_iter *i); size_t dax_copy_to_iter(struct dax_device *dax_dev, pgoff_t pgoff, void *addr, @@ -255,9 +255,10 @@ void dax_flush(struct dax_device *dax_dev, void *addr, size_t size); ssize_t dax_iomap_rw(struct kiocb *iocb, struct iov_iter *iter, const struct iomap_ops *ops); vm_fault_t dax_iomap_fault(struct vm_fault *vmf, unsigned int order, - pfn_t *pfnp, int *errp, const struct iomap_ops *ops); + unsigned long *pfnp, int *errp, + const struct iomap_ops *ops); vm_fault_t dax_finish_sync_fault(struct vm_fault *vmf, - unsigned int order, pfn_t pfn); + unsigned int order, unsigned long pfn); int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index); void 
dax_delete_mapping_range(struct address_space *mapping, loff_t start, loff_t end); diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h index bcc6d7b..692e4c0 100644 --- a/include/linux/device-mapper.h +++ b/include/linux/device-mapper.h @@ -149,7 +149,7 @@ typedef int (*dm_busy_fn) (struct dm_target *ti); */ typedef long (*dm_dax_direct_access_fn) (struct dm_target *ti, pgoff_t pgoff, long nr_pages, enum dax_access_mode node, void **kaddr, - pfn_t *pfn); + unsigned long *pfn); typedef int (*dm_dax_zero_page_range_fn)(struct dm_target *ti, pgoff_t pgoff, size_t nr_pages); diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h index f427053..4b28d9e 100644 --- a/include/linux/huge_mm.h +++ b/include/linux/huge_mm.h @@ -37,8 +37,10 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr, pgprot_t newprot, unsigned long cp_flags); -vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write); -vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write); +vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, unsigned long pfn, + bool write); +vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, unsigned long pfn, + bool write); vm_fault_t vmf_insert_folio_pmd(struct vm_fault *vmf, struct folio *folio, bool write); vm_fault_t vmf_insert_folio_pud(struct vm_fault *vmf, struct folio *folio, diff --git a/include/linux/mm.h b/include/linux/mm.h index 19950c2..f3b817e 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -3622,9 +3622,9 @@ vm_fault_t vmf_insert_pfn(struct vm_area_struct *vma, unsigned long addr, vm_fault_t vmf_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn, pgprot_t pgprot); vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr, - pfn_t pfn); + unsigned long pfn); vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma, - unsigned long addr, pfn_t pfn); + unsigned long addr, unsigned 
long pfn); int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len); static inline vm_fault_t vmf_insert_page(struct vm_area_struct *vma, diff --git a/include/linux/pfn.h b/include/linux/pfn.h index 14bc053..b90ca0b 100644 --- a/include/linux/pfn.h +++ b/include/linux/pfn.h @@ -4,15 +4,6 @@ #ifndef __ASSEMBLY__ #include - -/* - * pfn_t: encapsulates a page-frame number that is optionally backed - * by memmap (struct page). Whether a pfn_t has a 'struct page' - * backing is indicated by flags in the high bits of the value. - */ -typedef struct { - u64 val; -} pfn_t; #endif #define PFN_ALIGN(x) (((unsigned long)(x) + (PAGE_SIZE - 1)) & PAGE_MASK) diff --git a/include/linux/pfn_t.h b/include/linux/pfn_t.h deleted file mode 100644 index be8c174..0000000 --- a/include/linux/pfn_t.h +++ /dev/null @@ -1,85 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef _LINUX_PFN_T_H_ -#define _LINUX_PFN_T_H_ -#include - -/* - * PFN_FLAGS_MASK - mask of all the possible valid pfn_t flags - * PFN_DEV - pfn is not covered by system memmap by default - */ -#define PFN_FLAGS_MASK (((u64) (~PAGE_MASK)) << (BITS_PER_LONG_LONG - PAGE_SHIFT)) - -#define PFN_FLAGS_TRACE { } - -static inline pfn_t __pfn_to_pfn_t(unsigned long pfn, u64 flags) -{ - pfn_t pfn_t = { .val = pfn | (flags & PFN_FLAGS_MASK), }; - - return pfn_t; -} - -/* a default pfn to pfn_t conversion assumes that @pfn is pfn_valid() */ -static inline pfn_t pfn_to_pfn_t(unsigned long pfn) -{ - return __pfn_to_pfn_t(pfn, 0); -} - -static inline pfn_t phys_to_pfn_t(phys_addr_t addr, u64 flags) -{ - return __pfn_to_pfn_t(addr >> PAGE_SHIFT, flags); -} - -static inline bool pfn_t_has_page(pfn_t pfn) -{ - return true; -} - -static inline unsigned long pfn_t_to_pfn(pfn_t pfn) -{ - return pfn.val & ~PFN_FLAGS_MASK; -} - -static inline struct page *pfn_t_to_page(pfn_t pfn) -{ - if (pfn_t_has_page(pfn)) - return pfn_to_page(pfn_t_to_pfn(pfn)); - return NULL; -} - -static inline phys_addr_t 
pfn_t_to_phys(pfn_t pfn) -{ - return PFN_PHYS(pfn_t_to_pfn(pfn)); -} - -static inline pfn_t page_to_pfn_t(struct page *page) -{ - return pfn_to_pfn_t(page_to_pfn(page)); -} - -static inline int pfn_t_valid(pfn_t pfn) -{ - return pfn_valid(pfn_t_to_pfn(pfn)); -} - -#ifdef CONFIG_MMU -static inline pte_t pfn_t_pte(pfn_t pfn, pgprot_t pgprot) -{ - return pfn_pte(pfn_t_to_pfn(pfn), pgprot); -} -#endif - -#ifdef CONFIG_TRANSPARENT_HUGEPAGE -static inline pmd_t pfn_t_pmd(pfn_t pfn, pgprot_t pgprot) -{ - return pfn_pmd(pfn_t_to_pfn(pfn), pgprot); -} - -#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD -static inline pud_t pfn_t_pud(pfn_t pfn, pgprot_t pgprot) -{ - return pfn_pud(pfn_t_to_pfn(pfn), pgprot); -} -#endif -#endif - -#endif /* _LINUX_PFN_T_H_ */ diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index 1c377de..e57bfb6 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -1503,7 +1503,7 @@ static inline int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot, * by vmf_insert_pfn(). 
*/ static inline void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot, - pfn_t pfn) + unsigned long pfn) { } @@ -1539,7 +1539,7 @@ extern int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot, unsigned long pfn, unsigned long addr, unsigned long size); extern void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot, - pfn_t pfn); + unsigned long pfn); extern int track_pfn_copy(struct vm_area_struct *vma); extern void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn, unsigned long size, bool mm_wr_locked); diff --git a/include/trace/events/fs_dax.h b/include/trace/events/fs_dax.h index 86fe6ae..1af7e2e 100644 --- a/include/trace/events/fs_dax.h +++ b/include/trace/events/fs_dax.h @@ -104,7 +104,7 @@ DEFINE_PMD_LOAD_HOLE_EVENT(dax_pmd_load_hole_fallback); DECLARE_EVENT_CLASS(dax_pmd_insert_mapping_class, TP_PROTO(struct inode *inode, struct vm_fault *vmf, - long length, pfn_t pfn, void *radix_entry), + long length, unsigned long pfn, void *radix_entry), TP_ARGS(inode, vmf, length, pfn, radix_entry), TP_STRUCT__entry( __field(unsigned long, ino) @@ -123,11 +123,11 @@ DECLARE_EVENT_CLASS(dax_pmd_insert_mapping_class, __entry->address = vmf->address; __entry->write = vmf->flags & FAULT_FLAG_WRITE; __entry->length = length; - __entry->pfn_val = pfn.val; + __entry->pfn_val = pfn; __entry->radix_entry = radix_entry; ), TP_printk("dev %d:%d ino %#lx %s %s address %#lx length %#lx " - "pfn %#llx %s radix_entry %#lx", + "pfn %#llx radix_entry %#lx", MAJOR(__entry->dev), MINOR(__entry->dev), __entry->ino, @@ -135,9 +135,7 @@ DECLARE_EVENT_CLASS(dax_pmd_insert_mapping_class, __entry->write ? 
"write" : "read", __entry->address, __entry->length, - __entry->pfn_val & ~PFN_FLAGS_MASK, - __print_flags_u64(__entry->pfn_val & PFN_FLAGS_MASK, "|", - PFN_FLAGS_TRACE), + __entry->pfn_val, (unsigned long)__entry->radix_entry ) ) @@ -145,7 +143,7 @@ DECLARE_EVENT_CLASS(dax_pmd_insert_mapping_class, #define DEFINE_PMD_INSERT_MAPPING_EVENT(name) \ DEFINE_EVENT(dax_pmd_insert_mapping_class, name, \ TP_PROTO(struct inode *inode, struct vm_fault *vmf, \ - long length, pfn_t pfn, void *radix_entry), \ + long length, unsigned long pfn, void *radix_entry), \ TP_ARGS(inode, vmf, length, pfn, radix_entry)) DEFINE_PMD_INSERT_MAPPING_EVENT(dax_pmd_insert_mapping); diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c index cf5ff92..a0e5d01 100644 --- a/mm/debug_vm_pgtable.c +++ b/mm/debug_vm_pgtable.c @@ -20,7 +20,6 @@ #include #include #include -#include #include #include #include diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 1962b8e..32632dc 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -22,7 +22,6 @@ #include #include #include -#include #include #include #include @@ -1376,7 +1375,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf) } static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr, - pmd_t *pmd, pfn_t pfn, pgprot_t prot, bool write, + pmd_t *pmd, unsigned long pfn, pgprot_t prot, bool write, pgtable_t pgtable) { struct mm_struct *mm = vma->vm_mm; @@ -1386,7 +1385,7 @@ static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr, if (!pmd_none(*pmd)) { if (write) { - if (pmd_pfn(*pmd) != pfn_t_to_pfn(pfn)) { + if (pmd_pfn(*pmd) != pfn) { WARN_ON_ONCE(!is_huge_zero_pmd(*pmd)); return -EEXIST; } @@ -1399,7 +1398,7 @@ static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr, return -EEXIST; } - entry = pmd_mkhuge(pfn_t_pmd(pfn, prot)); + entry = pmd_mkhuge(pfn_pmd(pfn, prot)); entry = pmd_mkspecial(entry); if (write) { entry = pmd_mkyoung(pmd_mkdirty(entry)); @@ -1426,7 +1425,8 @@ static 
int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr, * * Return: vm_fault_t value. */ -vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write) +vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, unsigned long pfn, + bool write) { unsigned long addr = vmf->address & PMD_MASK; struct vm_area_struct *vma = vmf->vma; @@ -1493,9 +1493,8 @@ vm_fault_t vmf_insert_folio_pmd(struct vm_fault *vmf, struct folio *folio, folio_add_file_rmap_pmd(folio, &folio->page, vma); add_mm_counter(mm, mm_counter_file(folio), HPAGE_PMD_NR); } - error = insert_pfn_pmd(vma, addr, vmf->pmd, - pfn_to_pfn_t(folio_pfn(folio)), vma->vm_page_prot, - write, pgtable); + error = insert_pfn_pmd(vma, addr, vmf->pmd, folio_pfn(folio), + vma->vm_page_prot, write, pgtable); spin_unlock(ptl); if (error && pgtable) pte_free(mm, pgtable); @@ -1513,7 +1512,7 @@ static pud_t maybe_pud_mkwrite(pud_t pud, struct vm_area_struct *vma) } static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr, - pud_t *pud, pfn_t pfn, bool write) + pud_t *pud, unsigned long pfn, bool write) { struct mm_struct *mm = vma->vm_mm; pgprot_t prot = vma->vm_page_prot; @@ -1521,7 +1520,7 @@ static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr, if (!pud_none(*pud)) { if (write) { - if (WARN_ON_ONCE(pud_pfn(*pud) != pfn_t_to_pfn(pfn))) + if (WARN_ON_ONCE(pud_pfn(*pud) != pfn)) return; entry = pud_mkyoung(*pud); entry = maybe_pud_mkwrite(pud_mkdirty(entry), vma); @@ -1531,7 +1530,7 @@ static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr, return; } - entry = pud_mkhuge(pfn_t_pud(pfn, prot)); + entry = pud_mkhuge(pfn_pud(pfn, prot)); entry = pud_mkspecial(entry); if (write) { entry = pud_mkyoung(pud_mkdirty(entry)); @@ -1551,7 +1550,8 @@ static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr, * * Return: vm_fault_t value. 
*/ -vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write) +vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, unsigned long pfn, + bool write) { unsigned long addr = vmf->address & PUD_MASK; struct vm_area_struct *vma = vmf->vma; @@ -1616,8 +1616,7 @@ vm_fault_t vmf_insert_folio_pud(struct vm_fault *vmf, struct folio *folio, folio_add_file_rmap_pud(folio, &folio->page, vma); add_mm_counter(mm, mm_counter_file(folio), HPAGE_PUD_NR); } - insert_pfn_pud(vma, addr, vmf->pud, pfn_to_pfn_t(folio_pfn(folio)), - write); + insert_pfn_pud(vma, addr, vmf->pud, folio_pfn(folio), write); spin_unlock(ptl); return VM_FAULT_NOPAGE; diff --git a/mm/memory.c b/mm/memory.c index 296ef2c..b10d999 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -57,7 +57,6 @@ #include #include #include -#include #include #include #include @@ -2406,7 +2405,7 @@ int vm_map_pages_zero(struct vm_area_struct *vma, struct page **pages, EXPORT_SYMBOL(vm_map_pages_zero); static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr, - pfn_t pfn, pgprot_t prot, bool mkwrite) + unsigned long pfn, pgprot_t prot, bool mkwrite) { struct mm_struct *mm = vma->vm_mm; pte_t *pte, entry; @@ -2428,7 +2427,7 @@ static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr, * allocation and mapping invalidation so just skip the * update. */ - if (pte_pfn(entry) != pfn_t_to_pfn(pfn)) { + if (pte_pfn(entry) != pfn) { WARN_ON_ONCE(!is_zero_pfn(pte_pfn(entry))); goto out_unlock; } @@ -2441,7 +2440,7 @@ static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr, } /* Ok, finally just insert the thing.. 
*/ - entry = pte_mkspecial(pfn_t_pte(pfn, prot)); + entry = pte_mkspecial(pfn_pte(pfn, prot)); if (mkwrite) { entry = pte_mkyoung(entry); @@ -2510,10 +2509,9 @@ vm_fault_t vmf_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr, if (!pfn_modify_allowed(pfn, pgprot)) return VM_FAULT_SIGBUS; - track_pfn_insert(vma, &pgprot, __pfn_to_pfn_t(pfn, 0)); + track_pfn_insert(vma, &pgprot, pfn); - return insert_pfn(vma, addr, __pfn_to_pfn_t(pfn, 0), pgprot, - false); + return insert_pfn(vma, addr, pfn, pgprot, false); } EXPORT_SYMBOL(vmf_insert_pfn_prot); @@ -2544,21 +2542,22 @@ vm_fault_t vmf_insert_pfn(struct vm_area_struct *vma, unsigned long addr, } EXPORT_SYMBOL(vmf_insert_pfn); -static bool vm_mixed_ok(struct vm_area_struct *vma, pfn_t pfn, bool mkwrite) +static bool vm_mixed_ok(struct vm_area_struct *vma, unsigned long pfn, + bool mkwrite) { - if (unlikely(is_zero_pfn(pfn_t_to_pfn(pfn))) && + if (unlikely(is_zero_pfn(pfn)) && (mkwrite || !vm_mixed_zeropage_allowed(vma))) return false; /* these checks mirror the abort conditions in vm_normal_page */ if (vma->vm_flags & VM_MIXEDMAP) return true; - if (is_zero_pfn(pfn_t_to_pfn(pfn))) + if (is_zero_pfn(pfn)) return true; return false; } static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma, - unsigned long addr, pfn_t pfn, bool mkwrite) + unsigned long addr, unsigned long pfn, bool mkwrite) { pgprot_t pgprot = vma->vm_page_prot; int err; @@ -2571,7 +2570,7 @@ static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma, track_pfn_insert(vma, &pgprot, pfn); - if (!pfn_modify_allowed(pfn_t_to_pfn(pfn), pgprot)) + if (!pfn_modify_allowed(pfn, pgprot)) return VM_FAULT_SIGBUS; /* @@ -2581,7 +2580,7 @@ static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma, * than insert_pfn). If a zero_pfn were inserted into a VM_MIXEDMAP * without pte special, it would there be refcounted as a normal page. 
*/ - if (!IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL) && pfn_t_valid(pfn)) { + if (!IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL) && pfn_valid(pfn)) { struct page *page; /* @@ -2589,7 +2588,7 @@ static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma, * regardless of whether the caller specified flags that * result in pfn_t_has_page() == false. */ - page = pfn_to_page(pfn_t_to_pfn(pfn)); + page = pfn_to_page(pfn); err = insert_page(vma, addr, page, pgprot, mkwrite); } else { return insert_pfn(vma, addr, pfn, pgprot, mkwrite); @@ -2624,7 +2623,7 @@ vm_fault_t vmf_insert_page_mkwrite(struct vm_fault *vmf, struct page *page, EXPORT_SYMBOL_GPL(vmf_insert_page_mkwrite); vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr, - pfn_t pfn) + unsigned long pfn) { return __vm_insert_mixed(vma, addr, pfn, false); } @@ -2636,7 +2635,7 @@ EXPORT_SYMBOL(vmf_insert_mixed); * the same entry was actually inserted. */ vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma, - unsigned long addr, pfn_t pfn) + unsigned long addr, unsigned long pfn) { return __vm_insert_mixed(vma, addr, pfn, true); } diff --git a/mm/memremap.c b/mm/memremap.c index 532a52a..d875534 100644 --- a/mm/memremap.c +++ b/mm/memremap.c @@ -5,7 +5,6 @@ #include #include #include -#include #include #include #include diff --git a/mm/migrate.c b/mm/migrate.c index 365c6da..e3c3362 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -35,7 +35,6 @@ #include #include #include -#include #include #include #include diff --git a/tools/testing/nvdimm/pmem-dax.c b/tools/testing/nvdimm/pmem-dax.c index c1ec099..05e763a 100644 --- a/tools/testing/nvdimm/pmem-dax.c +++ b/tools/testing/nvdimm/pmem-dax.c @@ -10,7 +10,7 @@ long __pmem_direct_access(struct pmem_device *pmem, pgoff_t pgoff, long nr_pages, enum dax_access_mode mode, void **kaddr, - pfn_t *pfn) + unsigned long *pfn) { resource_size_t offset = PFN_PHYS(pgoff) + pmem->data_offset; @@ -29,7 +29,7 @@ long __pmem_direct_access(struct pmem_device 
*pmem, pgoff_t pgoff, *kaddr = pmem->virt_addr + offset; page = vmalloc_to_page(pmem->virt_addr + offset); if (pfn) - *pfn = page_to_pfn_t(page); + *pfn = page_to_pfn(page); pr_debug_ratelimited("%s: pmem: %p pgoff: %#lx pfn: %#lx\n", __func__, pmem, pgoff, page_to_pfn(page)); @@ -39,7 +39,7 @@ long __pmem_direct_access(struct pmem_device *pmem, pgoff_t pgoff, if (kaddr) *kaddr = pmem->virt_addr + offset; if (pfn) - *pfn = phys_to_pfn_t(pmem->phys_addr + offset, pmem->pfn_flags); + *pfn = PHYS_PFN(pmem->phys_addr + offset); /* * If badblocks are present, limit known good range to the diff --git a/tools/testing/nvdimm/test/iomap.c b/tools/testing/nvdimm/test/iomap.c index ddceb04..f7e7bfe 100644 --- a/tools/testing/nvdimm/test/iomap.c +++ b/tools/testing/nvdimm/test/iomap.c @@ -8,7 +8,6 @@ #include #include #include -#include #include #include #include @@ -135,12 +134,6 @@ void *__wrap_devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap) } EXPORT_SYMBOL_GPL(__wrap_devm_memremap_pages); -pfn_t __wrap_phys_to_pfn_t(phys_addr_t addr, unsigned long flags) -{ - return phys_to_pfn_t(addr, flags); -} -EXPORT_SYMBOL(__wrap_phys_to_pfn_t); - void *__wrap_memremap(resource_size_t offset, size_t size, unsigned long flags) { From patchwork Wed Feb 19 05:04:56 2025 X-Patchwork-Submitter: Alistair Popple X-Patchwork-Id: 13981501 
From: Alistair Popple
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: Alistair Popple, gerald.schaefer@linux.ibm.com, dan.j.williams@intel.com,
    jgg@ziepe.ca, willy@infradead.org, david@redhat.com,
    linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-fsdevel@vger.kernel.org,
    linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org,
    jhubbard@nvidia.com, hch@lst.de, zhang.lyra@gmail.com,
    debug@rivosinc.com, bjorn@kernel.org, balbirs@nvidia.com
Subject: [PATCH RFC v2 12/12] mm/memremap: Remove unused devmap_managed_key
Date: Wed, 19 Feb 2025 16:04:56 +1100
Message-ID: <7efe718b19363bfc1ccd75c558ba9e5fcd94fa0c.1739941374.git-series.apopple@nvidia.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To:
References:
Precedence: bulk
X-Mailing-List: linux-fsdevel@vger.kernel.org
MIME-Version: 1.0
It's no longer used so remove it.

Signed-off-by: Alistair Popple
---
 mm/memremap.c | 27 ---------------------------
 1 file changed, 27 deletions(-)

diff --git a/mm/memremap.c b/mm/memremap.c
index d875534..e40672b 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -38,30 +38,6 @@ unsigned long memremap_compat_align(void)
 EXPORT_SYMBOL_GPL(memremap_compat_align);
 #endif

-#ifdef CONFIG_FS_DAX
-DEFINE_STATIC_KEY_FALSE(devmap_managed_key);
-EXPORT_SYMBOL(devmap_managed_key);
-
-static void devmap_managed_enable_put(struct dev_pagemap *pgmap)
-{
-	if (pgmap->type == MEMORY_DEVICE_FS_DAX)
-		static_branch_dec(&devmap_managed_key);
-}
-
-static void devmap_managed_enable_get(struct dev_pagemap *pgmap)
-{
-	if (pgmap->type == MEMORY_DEVICE_FS_DAX)
-		static_branch_inc(&devmap_managed_key);
-}
-#else
-static void devmap_managed_enable_get(struct dev_pagemap *pgmap)
-{
-}
-static void devmap_managed_enable_put(struct dev_pagemap *pgmap)
-{
-}
-#endif /* CONFIG_FS_DAX */
-
 static void pgmap_array_delete(struct range *range)
 {
 	xa_store_range(&pgmap_array, PHYS_PFN(range->start), PHYS_PFN(range->end),
@@ -150,7 +126,6 @@ void memunmap_pages(struct dev_pagemap *pgmap)
 	percpu_ref_exit(&pgmap->ref);

 	WARN_ONCE(pgmap->altmap.alloc, "failed to free all reserved pages\n");
-	devmap_managed_enable_put(pgmap);
 }
 EXPORT_SYMBOL_GPL(memunmap_pages);

@@ -353,8 +328,6 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid)
 	if (error)
 		return ERR_PTR(error);

-	devmap_managed_enable_get(pgmap);
-
 	/*
 	 * Clear the pgmap nr_range as it will be incremented for each
 	 * successfully processed range. This communicates how many