From patchwork Tue Dec 17 05:13:06 2024
X-Patchwork-Submitter: Alistair Popple <apopple@nvidia.com>
X-Patchwork-Id: 13911035
From: Alistair Popple <apopple@nvidia.com>
To: akpm@linux-foundation.org, dan.j.williams@intel.com, linux-mm@kvack.org
Cc: Alistair Popple <apopple@nvidia.com>, lina@asahilina.net,
	zhang.lyra@gmail.com, gerald.schaefer@linux.ibm.com,
	vishal.l.verma@intel.com, dave.jiang@intel.com, logang@deltatee.com,
	bhelgaas@google.com, jack@suse.cz, jgg@ziepe.ca,
	catalin.marinas@arm.com, will@kernel.org, mpe@ellerman.id.au,
	npiggin@gmail.com, dave.hansen@linux.intel.com, ira.weiny@intel.com,
	willy@infradead.org, djwong@kernel.org, tytso@mit.edu,
	linmiaohe@huawei.com, david@redhat.com, peterx@redhat.com,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
	nvdimm@lists.linux.dev, linux-cxl@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org,
	linux-xfs@vger.kernel.org, jhubbard@nvidia.com, hch@lst.de,
	david@fromorbit.com
Subject: [PATCH v4 23/25] mm: Remove pXX_devmap callers
Date: Tue, 17 Dec 2024 16:13:06 +1100
Message-ID: 
In-Reply-To: 
References: 
X-Mailer: git-send-email 2.45.2
The devmap PTE special bit was used to detect mappings of FS DAX pages. This
tracking was required to ensure the generic mm did not manipulate the page
reference counts, as FS DAX implemented its own reference counting scheme.

Now that FS DAX pages have their references counted the same way as normal
pages, this tracking is no longer needed and can be removed.

Almost all existing uses of pmd_devmap() are paired with a check of
pmd_trans_huge(). As pmd_trans_huge() now returns true for FS DAX pages,
dropping the check in these cases doesn't change anything.

However, care needs to be taken because pmd_trans_huge() previously also
checked that a page was not an FS DAX page. This is dealt with either by
checking !vma_is_dax() or by relying on the fact that the page pointer was
obtained from a page list. The latter is possible because zone device pages
cannot appear in any page list due to sharing page->lru with page->pgmap.
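For illustration only (not part of this patch), the two call-site patterns
described above reduce to something like the sketch below. pmd_trans_huge(),
pmd_devmap() and vma_is_dax() are the real kernel helpers; the two enclosing
functions are hypothetical examples:

	/* Illustrative sketch only, not part of this patch. */
	static bool example_is_huge_mapping(pmd_t pmd)
	{
		/*
		 * Before: FS DAX PMDs were only matched via the devmap bit:
		 *	return pmd_trans_huge(pmd) || pmd_devmap(pmd);
		 * After: pmd_trans_huge() is true for FS DAX PMDs as well.
		 */
		return pmd_trans_huge(pmd);
	}

	static bool example_non_dax_huge(struct vm_area_struct *vma, pmd_t pmd)
	{
		/*
		 * Callers that must exclude FS DAX can no longer rely on
		 * pmd_trans_huge() being false for those pages; check the
		 * VMA explicitly instead.
		 */
		return pmd_trans_huge(pmd) && !vma_is_dax(vma);
	}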
Signed-off-by: Alistair Popple <apopple@nvidia.com>
---
 arch/powerpc/mm/book3s64/hash_pgtable.c  |   3 +-
 arch/powerpc/mm/book3s64/pgtable.c       |   8 +-
 arch/powerpc/mm/book3s64/radix_pgtable.c |   5 +-
 arch/powerpc/mm/pgtable.c                |   2 +-
 fs/dax.c                                 |   5 +-
 fs/userfaultfd.c                         |   2 +-
 include/linux/huge_mm.h                  |  10 +-
 include/linux/pgtable.h                  |   2 +-
 mm/gup.c                                 | 162 +------------------------
 mm/hmm.c                                 |   7 +-
 mm/huge_memory.c                         |  71 +----------
 mm/khugepaged.c                          |   2 +-
 mm/mapping_dirty_helpers.c               |   4 +-
 mm/memory.c                              |  35 +-----
 mm/migrate_device.c                      |   2 +-
 mm/mprotect.c                            |   2 +-
 mm/mremap.c                              |   5 +-
 mm/page_vma_mapped.c                     |   5 +-
 mm/pagewalk.c                            |  14 +-
 mm/pgtable-generic.c                     |   7 +-
 mm/userfaultfd.c                         |   5 +-
 mm/vmscan.c                              |   5 +-
 22 files changed, 63 insertions(+), 300 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/hash_pgtable.c b/arch/powerpc/mm/book3s64/hash_pgtable.c
index 988948d..82d3117 100644
--- a/arch/powerpc/mm/book3s64/hash_pgtable.c
+++ b/arch/powerpc/mm/book3s64/hash_pgtable.c
@@ -195,7 +195,7 @@ unsigned long hash__pmd_hugepage_update(struct mm_struct *mm, unsigned long addr
 	unsigned long old;
 
 #ifdef CONFIG_DEBUG_VM
-	WARN_ON(!hash__pmd_trans_huge(*pmdp) && !pmd_devmap(*pmdp));
+	WARN_ON(!hash__pmd_trans_huge(*pmdp));
 	assert_spin_locked(pmd_lockptr(mm, pmdp));
 #endif
 
@@ -227,7 +227,6 @@ pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long addres
 
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
 	VM_BUG_ON(pmd_trans_huge(*pmdp));
-	VM_BUG_ON(pmd_devmap(*pmdp));
 
 	pmd = *pmdp;
 	pmd_clear(pmdp);
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index 3745425..de66682 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -63,7 +63,7 @@ int pmdp_set_access_flags(struct vm_area_struct *vma, unsigned long address,
 {
 	int changed;
 #ifdef CONFIG_DEBUG_VM
-	WARN_ON(!pmd_trans_huge(*pmdp) && !pmd_devmap(*pmdp));
+	WARN_ON(!pmd_trans_huge(*pmdp));
 	assert_spin_locked(pmd_lockptr(vma->vm_mm, pmdp));
 #endif
 	changed = !pmd_same(*(pmdp), entry);
@@ -83,7 +83,6 @@ int pudp_set_access_flags(struct vm_area_struct *vma, unsigned long address,
 {
 	int changed;
 #ifdef CONFIG_DEBUG_VM
-	WARN_ON(!pud_devmap(*pudp));
 	assert_spin_locked(pud_lockptr(vma->vm_mm, pudp));
 #endif
 	changed = !pud_same(*(pudp), entry);
@@ -206,7 +205,7 @@ pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
 	pmd_t pmd;
 	VM_BUG_ON(addr & ~HPAGE_PMD_MASK);
-	VM_BUG_ON((pmd_present(*pmdp) && !pmd_trans_huge(*pmdp) &&
-		   !pmd_devmap(*pmdp)) || !pmd_present(*pmdp));
+	VM_BUG_ON((pmd_present(*pmdp) && !pmd_trans_huge(*pmdp)) ||
+		  !pmd_present(*pmdp));
 	pmd = pmdp_huge_get_and_clear(vma->vm_mm, addr, pmdp);
 	/*
 	 * if it not a fullmm flush, then we can possibly end up converting
@@ -224,8 +223,7 @@ pud_t pudp_huge_get_and_clear_full(struct vm_area_struct *vma,
 	pud_t pud;
 
 	VM_BUG_ON(addr & ~HPAGE_PMD_MASK);
-	VM_BUG_ON((pud_present(*pudp) && !pud_devmap(*pudp)) ||
-		  !pud_present(*pudp));
+	VM_BUG_ON(!pud_present(*pudp));
 	pud = pudp_huge_get_and_clear(vma->vm_mm, addr, pudp);
 	/*
 	 * if it not a fullmm flush, then we can possibly end up converting
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 311e211..f0b606d 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -1412,7 +1412,7 @@ unsigned long radix__pmd_hugepage_update(struct mm_struct *mm, unsigned long add
 	unsigned long old;
 
 #ifdef CONFIG_DEBUG_VM
-	WARN_ON(!radix__pmd_trans_huge(*pmdp) && !pmd_devmap(*pmdp));
+	WARN_ON(!radix__pmd_trans_huge(*pmdp));
 	assert_spin_locked(pmd_lockptr(mm, pmdp));
 #endif
 
@@ -1429,7 +1429,7 @@ unsigned long radix__pud_hugepage_update(struct mm_struct *mm, unsigned long add
 	unsigned long old;
 
 #ifdef CONFIG_DEBUG_VM
-	WARN_ON(!pud_devmap(*pudp));
+	WARN_ON(!pud_trans_huge(*pudp));
 	assert_spin_locked(pud_lockptr(mm, pudp));
 #endif
 
@@ -1447,7 +1447,6 @@ pmd_t radix__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long addre
 
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
 	VM_BUG_ON(radix__pmd_trans_huge(*pmdp));
-	VM_BUG_ON(pmd_devmap(*pmdp));
 	/*
 	 * khugepaged calls this for normal pmd
 	 */
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index 61df5ae..dfaa9fd 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -509,7 +509,7 @@ pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
 		return NULL;
 #endif
 
-	if (pmd_trans_huge(pmd) || pmd_devmap(pmd)) {
+	if (pmd_trans_huge(pmd)) {
 		if (is_thp)
 			*is_thp = true;
 		ret_pte = (pte_t *)pmdp;
diff --git a/fs/dax.c b/fs/dax.c
index 139891f..70d82cc 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1922,7 +1922,7 @@ static vm_fault_t dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
 	 * the PTE we need to set up. If so just return and the fault will be
 	 * retried.
 	 */
-	if (pmd_trans_huge(*vmf->pmd) || pmd_devmap(*vmf->pmd)) {
+	if (pmd_trans_huge(*vmf->pmd)) {
 		ret = VM_FAULT_NOPAGE;
 		goto unlock_entry;
 	}
@@ -2043,8 +2043,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
 	 * the PMD we need to set up. If so just return and the fault will be
 	 * retried.
 	 */
-	if (!pmd_none(*vmf->pmd) && !pmd_trans_huge(*vmf->pmd) &&
-	    !pmd_devmap(*vmf->pmd)) {
+	if (!pmd_none(*vmf->pmd) && !pmd_trans_huge(*vmf->pmd)) {
 		ret = 0;
 		goto unlock_entry;
 	}
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 7c0bd0b..c52b91f 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -304,7 +304,7 @@ static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
 		goto out;
 
 	ret = false;
-	if (!pmd_present(_pmd) || pmd_devmap(_pmd))
+	if (!pmd_present(_pmd) || vma_is_dax(vmf->vma))
 		goto out;
 
 	if (pmd_trans_huge(_pmd)) {
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 4ad9aa7..cf83981 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -368,8 +368,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 #define split_huge_pmd(__vma, __pmd, __address)				\
 	do {								\
 		pmd_t *____pmd = (__pmd);				\
-		if (is_swap_pmd(*____pmd) || pmd_trans_huge(*____pmd)	\
-					|| pmd_devmap(*____pmd))	\
+		if (is_swap_pmd(*____pmd) || pmd_trans_huge(*____pmd))	\
 			__split_huge_pmd(__vma, __pmd, __address,	\
 						false, NULL);		\
 	} while (0)
@@ -395,8 +394,7 @@ change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
 #define split_huge_pud(__vma, __pud, __address)				\
 	do {								\
 		pud_t *____pud = (__pud);				\
-		if (pud_trans_huge(*____pud)				\
-					|| pud_devmap(*____pud))	\
+		if (pud_trans_huge(*____pud))				\
 			__split_huge_pud(__vma, __pud, __address);	\
 	} while (0)
@@ -419,7 +417,7 @@ static inline int is_swap_pmd(pmd_t pmd)
 static inline spinlock_t *pmd_trans_huge_lock(pmd_t *pmd,
 		struct vm_area_struct *vma)
 {
-	if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd))
+	if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd))
 		return __pmd_trans_huge_lock(pmd, vma);
 	else
 		return NULL;
@@ -427,7 +425,7 @@ static inline spinlock_t *pmd_trans_huge_lock(pmd_t *pmd,
 static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
 		struct vm_area_struct *vma)
 {
-	if (pud_trans_huge(*pud) || pud_devmap(*pud))
+	if (pud_trans_huge(*pud))
 		return __pud_trans_huge_lock(pud, vma);
 	else
 		return NULL;
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 94d267d..00e4a06 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1635,7 +1635,7 @@ static inline int pud_trans_unstable(pud_t *pud)
     defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
 	pud_t pudval = READ_ONCE(*pud);
 
-	if (pud_none(pudval) || pud_trans_huge(pudval) || pud_devmap(pudval))
+	if (pud_none(pudval) || pud_trans_huge(pudval))
 		return 1;
 	if (unlikely(pud_bad(pudval))) {
 		pud_clear_bad(pud);
diff --git a/mm/gup.c b/mm/gup.c
index d6575ed..95be530 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -678,31 +678,9 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma,
 		return NULL;
 
 	pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
-
-	if (IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) &&
-	    pud_devmap(pud)) {
-		/*
-		 * device mapped pages can only be returned if the caller
-		 * will manage the page reference count.
-		 *
-		 * At least one of FOLL_GET | FOLL_PIN must be set, so
-		 * assert that here:
-		 */
-		if (!(flags & (FOLL_GET | FOLL_PIN)))
-			return ERR_PTR(-EEXIST);
-
-		if (flags & FOLL_TOUCH)
-			touch_pud(vma, addr, pudp, flags & FOLL_WRITE);
-
-		ctx->pgmap = get_dev_pagemap(pfn, ctx->pgmap);
-		if (!ctx->pgmap)
-			return ERR_PTR(-EFAULT);
-	}
-
 	page = pfn_to_page(pfn);
-	if (!pud_devmap(pud) && !pud_write(pud) &&
-	    gup_must_unshare(vma, flags, page))
+
+	if (!pud_write(pud) && gup_must_unshare(vma, flags, page))
 		return ERR_PTR(-EMLINK);
 
 	ret = try_grab_folio(page_folio(page), 1, flags);
@@ -861,8 +839,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	page = vm_normal_page(vma, address, pte);
 
 	/*
-	 * We only care about anon pages in can_follow_write_pte() and don't
-	 * have to worry about pte_devmap() because they are never anon.
+	 * We only care about anon pages in can_follow_write_pte().
 	 */
 	if ((flags & FOLL_WRITE) &&
 	    !can_follow_write_pte(pte, page, vma, flags)) {
@@ -870,18 +847,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 		goto out;
 	}
 
-	if (!page && pte_devmap(pte) && (flags & (FOLL_GET | FOLL_PIN))) {
-		/*
-		 * Only return device mapping pages in the FOLL_GET or FOLL_PIN
-		 * case since they are only valid while holding the pgmap
-		 * reference.
-		 */
-		*pgmap = get_dev_pagemap(pte_pfn(pte), *pgmap);
-		if (*pgmap)
-			page = pte_page(pte);
-		else
-			goto no_page;
-	} else if (unlikely(!page)) {
+	if (unlikely(!page)) {
 		if (flags & FOLL_DUMP) {
 			/* Avoid special (like zero) pages in core dumps */
 			page = ERR_PTR(-EFAULT);
@@ -963,14 +929,6 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 		return no_page_table(vma, flags, address);
 	if (!pmd_present(pmdval))
 		return no_page_table(vma, flags, address);
-	if (pmd_devmap(pmdval)) {
-		ptl = pmd_lock(mm, pmd);
-		page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
-		spin_unlock(ptl);
-		if (page)
-			return page;
-		return no_page_table(vma, flags, address);
-	}
 	if (likely(!pmd_leaf(pmdval)))
 		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
@@ -2892,7 +2850,7 @@ static int gup_fast_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
 		int *nr)
 {
 	struct dev_pagemap *pgmap = NULL;
-	int nr_start = *nr, ret = 0;
+	int ret = 0;
 	pte_t *ptep, *ptem;
 
 	ptem = ptep = pte_offset_map(&pmd, addr);
@@ -2916,16 +2874,7 @@ static int gup_fast_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
 		if (!pte_access_permitted(pte, flags & FOLL_WRITE))
 			goto pte_unmap;
 
-		if (pte_devmap(pte)) {
-			if (unlikely(flags & FOLL_LONGTERM))
-				goto pte_unmap;
-
-			pgmap = get_dev_pagemap(pte_pfn(pte), pgmap);
-			if (unlikely(!pgmap)) {
-				gup_fast_undo_dev_pagemap(nr, nr_start, flags, pages);
-				goto pte_unmap;
-			}
-		} else if (pte_special(pte))
+		if (pte_special(pte))
 			goto pte_unmap;
 
 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
@@ -2996,91 +2945,6 @@ static int gup_fast_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
 }
 #endif /* CONFIG_ARCH_HAS_PTE_SPECIAL */
 
-#if defined(CONFIG_ARCH_HAS_PTE_DEVMAP) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
-static int gup_fast_devmap_leaf(unsigned long pfn, unsigned long addr,
-	unsigned long end, unsigned int flags, struct page **pages, int *nr)
-{
-	int nr_start = *nr;
-	struct dev_pagemap *pgmap = NULL;
-
-	do {
-		struct folio *folio;
-		struct page *page = pfn_to_page(pfn);
-
-		pgmap = get_dev_pagemap(pfn, pgmap);
-		if (unlikely(!pgmap)) {
-			gup_fast_undo_dev_pagemap(nr, nr_start, flags, pages);
-			break;
-		}
-
-		folio = try_grab_folio_fast(page, 1, flags);
-		if (!folio) {
-			gup_fast_undo_dev_pagemap(nr, nr_start, flags, pages);
-			break;
-		}
-		folio_set_referenced(folio);
-		pages[*nr] = page;
-		(*nr)++;
-		pfn++;
-	} while (addr += PAGE_SIZE, addr != end);
-
-	put_dev_pagemap(pgmap);
-	return addr == end;
-}
-
-static int gup_fast_devmap_pmd_leaf(pmd_t orig, pmd_t *pmdp, unsigned long addr,
-		unsigned long end, unsigned int flags, struct page **pages,
-		int *nr)
-{
-	unsigned long fault_pfn;
-	int nr_start = *nr;
-
-	fault_pfn = pmd_pfn(orig) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
-	if (!gup_fast_devmap_leaf(fault_pfn, addr, end, flags, pages, nr))
-		return 0;
-
-	if (unlikely(pmd_val(orig) != pmd_val(*pmdp))) {
-		gup_fast_undo_dev_pagemap(nr, nr_start, flags, pages);
-		return 0;
-	}
-	return 1;
-}
-
-static int gup_fast_devmap_pud_leaf(pud_t orig, pud_t *pudp, unsigned long addr,
-		unsigned long end, unsigned int flags, struct page **pages,
-		int *nr)
-{
-	unsigned long fault_pfn;
-	int nr_start = *nr;
-
-	fault_pfn = pud_pfn(orig) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
-	if (!gup_fast_devmap_leaf(fault_pfn, addr, end, flags, pages, nr))
-		return 0;
-
-	if (unlikely(pud_val(orig) != pud_val(*pudp))) {
-		gup_fast_undo_dev_pagemap(nr, nr_start, flags, pages);
-		return 0;
-	}
-	return 1;
-}
-#else
-static int gup_fast_devmap_pmd_leaf(pmd_t orig, pmd_t *pmdp, unsigned long addr,
-		unsigned long end, unsigned int flags, struct page **pages,
-		int *nr)
-{
-	BUILD_BUG();
-	return 0;
-}
-
-static int gup_fast_devmap_pud_leaf(pud_t pud, pud_t *pudp, unsigned long addr,
-		unsigned long end, unsigned int flags, struct page **pages,
-		int *nr)
-{
-	BUILD_BUG();
-	return 0;
-}
-#endif
-
 static int gup_fast_pmd_leaf(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 		unsigned long end, unsigned int flags, struct page **pages,
 		int *nr)
@@ -3095,13 +2959,6 @@ static int gup_fast_pmd_leaf(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 	if (pmd_special(orig))
 		return 0;
 
-	if (pmd_devmap(orig)) {
-		if (unlikely(flags & FOLL_LONGTERM))
-			return 0;
-		return gup_fast_devmap_pmd_leaf(orig, pmdp, addr, end, flags,
-						pages, nr);
-	}
-
 	page = pmd_page(orig);
 	refs = record_subpages(page, PMD_SIZE, addr, end, pages + *nr);
@@ -3142,13 +2999,6 @@ static int gup_fast_pud_leaf(pud_t orig, pud_t *pudp, unsigned long addr,
 	if (pud_special(orig))
 		return 0;
 
-	if (pud_devmap(orig)) {
-		if (unlikely(flags & FOLL_LONGTERM))
-			return 0;
-		return gup_fast_devmap_pud_leaf(orig, pudp, addr, end, flags,
-						pages, nr);
-	}
-
 	page = pud_page(orig);
 	refs = record_subpages(page, PUD_SIZE, addr, end, pages + *nr);
@@ -3187,8 +3037,6 @@ static int gup_fast_pgd_leaf(pgd_t orig, pgd_t *pgdp, unsigned long addr,
 	if (!pgd_access_permitted(orig, flags & FOLL_WRITE))
 		return 0;
 
-	BUILD_BUG_ON(pgd_devmap(orig));
-
 	page = pgd_page(orig);
 	refs = record_subpages(page, PGDIR_SIZE, addr, end, pages + *nr);
diff --git a/mm/hmm.c b/mm/hmm.c
index 082f7b7..285578e 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -298,7 +298,6 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 	 * fall through and treat it like a normal page.
 	 */
 	if (!vm_normal_page(walk->vma, addr, pte) &&
-	    !pte_devmap(pte) &&
 	    !is_zero_pfn(pte_pfn(pte))) {
 		if (hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0)) {
 			pte_unmap(ptep);
@@ -351,7 +350,7 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 		return hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
 	}
 
-	if (pmd_devmap(pmd) || pmd_trans_huge(pmd)) {
+	if (pmd_trans_huge(pmd)) {
 		/*
 		 * No need to take pmd_lock here, even if some other thread
 		 * is splitting the huge pmd we will get that event through
@@ -362,7 +361,7 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 		 * values.
 		 */
 		pmd = pmdp_get_lockless(pmdp);
-		if (!pmd_devmap(pmd) && !pmd_trans_huge(pmd))
+		if (!pmd_trans_huge(pmd))
 			goto again;
 
 		return hmm_vma_handle_pmd(walk, addr, end, hmm_pfns, pmd);
@@ -429,7 +428,7 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
 		return hmm_vma_walk_hole(start, end, -1, walk);
 	}
 
-	if (pud_leaf(pud) && pud_devmap(pud)) {
+	if (pud_leaf(pud) && vma_is_dax(walk->vma)) {
 		unsigned long i, npages, pfn;
 		unsigned int required_fault;
 		unsigned long *hmm_pfns;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 44672d3..e03d71f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1398,10 +1398,7 @@ static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
 	}
 
 	entry = pmd_mkhuge(pfn_t_pmd(pfn, prot));
-	if (pfn_t_devmap(pfn))
-		entry = pmd_mkdevmap(entry);
-	else
-		entry = pmd_mkspecial(entry);
+	entry = pmd_mkspecial(entry);
 	if (write) {
 		entry = pmd_mkyoung(pmd_mkdirty(entry));
 		entry = maybe_pmd_mkwrite(entry, vma);
@@ -1440,8 +1437,6 @@ vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write)
 	 * but we need to be consistent with PTEs and architectures that
 	 * can't support a 'special' bit.
 	 */
-	BUG_ON(!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) &&
-			!pfn_t_devmap(pfn));
 	BUG_ON((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) ==
 						(VM_PFNMAP|VM_MIXEDMAP));
 	BUG_ON((vma->vm_flags & VM_PFNMAP) && is_cow_mapping(vma->vm_flags));
@@ -1536,10 +1531,7 @@ static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
 	}
 
 	entry = pud_mkhuge(pfn_t_pud(pfn, prot));
-	if (pfn_t_devmap(pfn))
-		entry = pud_mkdevmap(entry);
-	else
-		entry = pud_mkspecial(entry);
+	entry = pud_mkspecial(entry);
 	if (write) {
 		entry = pud_mkyoung(pud_mkdirty(entry));
 		entry = maybe_pud_mkwrite(entry, vma);
@@ -1570,8 +1562,6 @@ vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write)
 	 * but we need to be consistent with PTEs and architectures that
 	 * can't support a 'special' bit.
 	 */
-	BUG_ON(!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) &&
-			!pfn_t_devmap(pfn));
 	BUG_ON((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) ==
 						(VM_PFNMAP|VM_MIXEDMAP));
 	BUG_ON((vma->vm_flags & VM_PFNMAP) && is_cow_mapping(vma->vm_flags));
@@ -1645,46 +1635,6 @@ void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
 	update_mmu_cache_pmd(vma, addr, pmd);
 }
 
-struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
-		pmd_t *pmd, int flags, struct dev_pagemap **pgmap)
-{
-	unsigned long pfn = pmd_pfn(*pmd);
-	struct mm_struct *mm = vma->vm_mm;
-	struct page *page;
-	int ret;
-
-	assert_spin_locked(pmd_lockptr(mm, pmd));
-
-	if (flags & FOLL_WRITE && !pmd_write(*pmd))
-		return NULL;
-
-	if (pmd_present(*pmd) && pmd_devmap(*pmd))
-		/* pass */;
-	else
-		return NULL;
-
-	if (flags & FOLL_TOUCH)
-		touch_pmd(vma, addr, pmd, flags & FOLL_WRITE);
-
-	/*
-	 * device mapped pages can only be returned if the
-	 * caller will manage the page reference count.
-	 */
-	if (!(flags & (FOLL_GET | FOLL_PIN)))
-		return ERR_PTR(-EEXIST);
-
-	pfn += (addr & ~PMD_MASK) >> PAGE_SHIFT;
-	*pgmap = get_dev_pagemap(pfn, *pgmap);
-	if (!*pgmap)
-		return ERR_PTR(-EFAULT);
-	page = pfn_to_page(pfn);
-	ret = try_grab_folio(page_folio(page), 1, flags);
-	if (ret)
-		page = ERR_PTR(ret);
-
-	return page;
-}
-
 int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		  pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
 		  struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
@@ -1836,7 +1786,7 @@ int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 
 	ret = -EAGAIN;
 	pud = *src_pud;
-	if (unlikely(!pud_trans_huge(pud) && !pud_devmap(pud)))
+	if (unlikely(!pud_trans_huge(pud)))
 		goto out_unlock;
 
 	/*
@@ -2678,8 +2628,7 @@ spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma)
 {
 	spinlock_t *ptl;
 	ptl = pmd_lock(vma->vm_mm, pmd);
-	if (likely(is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) ||
-			pmd_devmap(*pmd)))
+	if (likely(is_swap_pmd(*pmd) || pmd_trans_huge(*pmd)))
 		return ptl;
 	spin_unlock(ptl);
 	return NULL;
@@ -2696,7 +2645,7 @@ spinlock_t *__pud_trans_huge_lock(pud_t *pud, struct vm_area_struct *vma)
 	spinlock_t *ptl;
 
 	ptl = pud_lock(vma->vm_mm, pud);
-	if (likely(pud_trans_huge(*pud) || pud_devmap(*pud)))
+	if (likely(pud_trans_huge(*pud)))
 		return ptl;
 	spin_unlock(ptl);
 	return NULL;
@@ -2746,7 +2695,7 @@ static void __split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
 	VM_BUG_ON(haddr & ~HPAGE_PUD_MASK);
 	VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
 	VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PUD_SIZE, vma);
-	VM_BUG_ON(!pud_trans_huge(*pud) && !pud_devmap(*pud));
+	VM_BUG_ON(!pud_trans_huge(*pud));
 
 	count_vm_event(THP_SPLIT_PUD);
 
@@ -2780,7 +2729,7 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
 				(address & HPAGE_PUD_MASK) + HPAGE_PUD_SIZE);
 	mmu_notifier_invalidate_range_start(&range);
 	ptl = pud_lock(vma->vm_mm, pud);
-	if (unlikely(!pud_trans_huge(*pud) && !pud_devmap(*pud)))
+	if (unlikely(!pud_trans_huge(*pud)))
 		goto out;
 	__split_huge_pud_locked(vma, pud, range.start);
 
@@ -2853,8 +2802,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	VM_BUG_ON(haddr & ~HPAGE_PMD_MASK);
 	VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
 	VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PMD_SIZE, vma);
-	VM_BUG_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd)
-				&& !pmd_devmap(*pmd));
+	VM_BUG_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd));
 
 	count_vm_event(THP_SPLIT_PMD);
 
@@ -3071,8 +3019,7 @@ void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
 	 * require a folio to check the PMD against. Otherwise, there
 	 * is a risk of replacing the wrong folio.
 	 */
-	if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
-	    is_pmd_migration_entry(*pmd)) {
+	if (pmd_trans_huge(*pmd) || is_pmd_migration_entry(*pmd)) {
 		if (folio && folio != pmd_folio(*pmd))
 			return;
 		__split_huge_pmd_locked(vma, pmd, address, freeze);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 99dc995..aedef75 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -957,8 +957,6 @@ static inline int check_pmd_state(pmd_t *pmd)
 		return SCAN_PMD_NULL;
 	if (pmd_trans_huge(pmde))
 		return SCAN_PMD_MAPPED;
-	if (pmd_devmap(pmde))
-		return SCAN_PMD_NULL;
 	if (pmd_bad(pmde))
 		return SCAN_PMD_NULL;
 	return SCAN_SUCCEED;
diff --git a/mm/mapping_dirty_helpers.c b/mm/mapping_dirty_helpers.c
index 2f8829b..208b428 100644
--- a/mm/mapping_dirty_helpers.c
+++ b/mm/mapping_dirty_helpers.c
@@ -129,7 +129,7 @@ static int wp_clean_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long end,
 	pmd_t pmdval = pmdp_get_lockless(pmd);
 
 	/* Do not split a huge pmd, present or migrated */
-	if (pmd_trans_huge(pmdval) || pmd_devmap(pmdval)) {
+	if (pmd_trans_huge(pmdval)) {
 		WARN_ON(pmd_write(pmdval) || pmd_dirty(pmdval));
 		walk->action = ACTION_CONTINUE;
 	}
@@ -152,7 +152,7 @@ static int wp_clean_pud_entry(pud_t *pud, unsigned long addr, unsigned long end,
 	pud_t pudval = READ_ONCE(*pud);
 
 	/* Do not split a huge pud */
-	if (pud_trans_huge(pudval) || pud_devmap(pudval)) {
+	if (pud_trans_huge(pudval)) {
 		WARN_ON(pud_write(pudval) || pud_dirty(pudval));
 		walk->action = ACTION_CONTINUE;
 	}
diff --git a/mm/memory.c b/mm/memory.c
index fa0d7b8..506fa23 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -603,16 +603,6 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 			return NULL;
 		if (is_zero_pfn(pfn))
 			return NULL;
-		if (pte_devmap(pte))
-		/*
-		 * NOTE: New users of ZONE_DEVICE will not set pte_devmap()
-		 * and will have refcounts incremented on their struct pages
-		 * when they are inserted into PTEs, thus they are safe to
-		 * return here. Legacy ZONE_DEVICE pages that set pte_devmap()
-		 * do not have refcounts. Example of legacy ZONE_DEVICE is
-		 * MEMORY_DEVICE_FS_DAX type in pmem or virtio_fs drivers.
-		 */
-			return NULL;
 
 		print_bad_pte(vma, addr, pte, NULL);
 		return NULL;
@@ -690,8 +680,6 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
 		}
 	}
 
-	if (pmd_devmap(pmd))
-		return NULL;
 	if (is_huge_zero_pmd(pmd))
 		return NULL;
 	if (unlikely(pfn > highest_memmap_pfn))
@@ -1245,8 +1233,7 @@ copy_pmd_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 	src_pmd = pmd_offset(src_pud, addr);
 	do {
 		next = pmd_addr_end(addr, end);
-		if (is_swap_pmd(*src_pmd) || pmd_trans_huge(*src_pmd)
-			|| pmd_devmap(*src_pmd)) {
+		if (is_swap_pmd(*src_pmd) || pmd_trans_huge(*src_pmd)) {
 			int err;
 			VM_BUG_ON_VMA(next-addr != HPAGE_PMD_SIZE, src_vma);
 			err = copy_huge_pmd(dst_mm, src_mm, dst_pmd, src_pmd,
@@ -1282,7 +1269,7 @@ copy_pud_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 	src_pud = pud_offset(src_p4d, addr);
 	do {
 		next = pud_addr_end(addr, end);
-		if (pud_trans_huge(*src_pud) || pud_devmap(*src_pud)) {
+		if (pud_trans_huge(*src_pud)) {
 			int err;
 
 			VM_BUG_ON_VMA(next-addr != HPAGE_PUD_SIZE, src_vma);
@@ -1797,7 +1784,7 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb,
 	pmd = pmd_offset(pud, addr);
 	do {
 		next = pmd_addr_end(addr, end);
-		if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd)) {
+		if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd)) {
 			if (next - addr != HPAGE_PMD_SIZE)
 				__split_huge_pmd(vma, pmd, addr, false, NULL);
 			else if (zap_huge_pmd(tlb, vma, pmd, addr)) {
@@ -1839,7 +1826,7 @@ static inline unsigned long zap_pud_range(struct mmu_gather *tlb,
 	pud = pud_offset(p4d, addr);
 	do {
 		next = pud_addr_end(addr, end);
-		if (pud_trans_huge(*pud) || pud_devmap(*pud)) {
+		if (pud_trans_huge(*pud)) {
 			if (next - addr != HPAGE_PUD_SIZE) {
 				mmap_assert_locked(tlb->mm);
 				split_huge_pud(vma, pud, addr);
@@ -2462,10 +2449,7 @@ static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr,
 	}
 
 	/* Ok, finally just insert the thing.. */
-	if (pfn_t_devmap(pfn))
-		entry = pte_mkdevmap(pfn_t_pte(pfn, prot));
-	else
-		entry = pte_mkspecial(pfn_t_pte(pfn, prot));
+	entry = pte_mkspecial(pfn_t_pte(pfn, prot));
 
 	if (mkwrite) {
 		entry = pte_mkyoung(entry);
@@ -2576,8 +2560,6 @@ static bool vm_mixed_ok(struct vm_area_struct *vma, pfn_t pfn, bool mkwrite)
 	/* these checks mirror the abort conditions in vm_normal_page */
 	if (vma->vm_flags & VM_MIXEDMAP)
 		return true;
-	if (pfn_t_devmap(pfn))
-		return true;
 	if (pfn_t_special(pfn))
 		return true;
 	if (is_zero_pfn(pfn_t_to_pfn(pfn)))
@@ -2609,8 +2591,7 @@ static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma,
 	 * than insert_pfn).  If a zero_pfn were inserted into a VM_MIXEDMAP
 	 * without pte special, it would there be refcounted as a normal page.
 	 */
-	if (!IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL) &&
-	    !pfn_t_devmap(pfn) && pfn_t_valid(pfn)) {
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL) && pfn_t_valid(pfn)) {
 		struct page *page;
 
 		/*
@@ -6042,7 +6023,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 		pud_t orig_pud = *vmf.pud;
 
 		barrier();
-		if (pud_trans_huge(orig_pud) || pud_devmap(orig_pud)) {
+		if (pud_trans_huge(orig_pud)) {
 
 			/*
 			 * TODO once we support anonymous PUDs: NUMA case and
@@ -6083,7 +6064,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 			pmd_migration_entry_wait(mm, vmf.pmd);
 			return 0;
 		}
-		if (pmd_trans_huge(vmf.orig_pmd) || pmd_devmap(vmf.orig_pmd)) {
+		if (pmd_trans_huge(vmf.orig_pmd)) {
 			if (pmd_protnone(vmf.orig_pmd) && vma_is_accessible(vma))
 				return do_huge_pmd_numa_page(&vmf);
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 2209070..a721e0d 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -599,7 +599,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
 	pmdp = pmd_alloc(mm, pudp, addr);
 	if (!pmdp)
 		goto abort;
-	if (pmd_trans_huge(*pmdp) || pmd_devmap(*pmdp))
+	if (pmd_trans_huge(*pmdp))
 		goto abort;
 	if (pte_alloc(mm, pmdp))
 		goto abort;
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 516b1d8..31055a8 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -384,7 +384,7 @@ static inline long change_pmd_range(struct mmu_gather *tlb,
 			goto next;
 
 		_pmd = pmdp_get_lockless(pmd);
-		if (is_swap_pmd(_pmd) || pmd_trans_huge(_pmd) || pmd_devmap(_pmd)) {
+		if (is_swap_pmd(_pmd) || pmd_trans_huge(_pmd)) {
 			if ((next - addr != HPAGE_PMD_SIZE) ||
 			    pgtable_split_needed(vma, cp_flags)) {
 				__split_huge_pmd(vma, pmd, addr, false, NULL);
diff --git a/mm/mremap.c b/mm/mremap.c
index 6047341..96fff18 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -603,7 +603,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 		new_pud = alloc_new_pud(vma->vm_mm, vma, new_addr);
 		if (!new_pud)
 			break;
-		if (pud_trans_huge(*old_pud) || pud_devmap(*old_pud)) {
+		if (pud_trans_huge(*old_pud)) {
 			if (extent == HPAGE_PUD_SIZE) {
 				move_pgt_entry(HPAGE_PUD, vma, old_addr, new_addr,
 					       old_pud, new_pud, need_rmap_locks);
@@ -625,8 +625,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 		if (!new_pmd)
 			break;
 again:
-		if (is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd) ||
-		    pmd_devmap(*old_pmd)) {
+		if (is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd)) {
 			if (extent == HPAGE_PMD_SIZE &&
 			    move_pgt_entry(HPAGE_PMD, vma, old_addr, new_addr,
 					   old_pmd, new_pmd, need_rmap_locks))
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 81839a9..18eadc5 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -242,8 +242,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 		 */
 		pmde = pmdp_get_lockless(pvmw->pmd);
 
-		if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde) ||
-		    (pmd_present(pmde) && pmd_devmap(pmde))) {
+		if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde)) {
 			pvmw->ptl = pmd_lock(mm, pvmw->pmd);
 			pmde = *pvmw->pmd;
 			if (!pmd_present(pmde)) {
@@ -258,7 +257,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 				return not_found(pvmw);
 			return true;
 		}
-		if (likely(pmd_trans_huge(pmde) || pmd_devmap(pmde))) {
+		if (likely(pmd_trans_huge(pmde))) {
 			if (pvmw->flags & PVMW_MIGRATION)
 				return not_found(pvmw);
 			if (!check_pmd(pmd_pfn(pmde), pvmw))
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index e478777..a85c331 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -143,8 +143,7 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
 			 * We are ONLY installing, so avoid unnecessarily
 			 * splitting a present huge page.
 			 */
-			if (pmd_present(*pmd) &&
-			    (pmd_trans_huge(*pmd) || pmd_devmap(*pmd)))
+			if (pmd_present(*pmd) && pmd_trans_huge(*pmd))
 				continue;
 		}
 
@@ -210,8 +209,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
 			 * We are ONLY installing, so avoid unnecessarily
 			 * splitting a present huge page.
 			 */
-			if (pud_present(*pud) &&
-			    (pud_trans_huge(*pud) || pud_devmap(*pud)))
+			if (pud_present(*pud) && pud_trans_huge(*pud))
 				continue;
 		}
 
@@ -872,7 +870,7 @@ struct folio *folio_walk_start(struct folio_walk *fw,
 		 * TODO: FW_MIGRATION support for PUD migration entries
 		 * once there are relevant users.
 		 */
-		if (!pud_present(pud) || pud_devmap(pud) || pud_special(pud)) {
+		if (!pud_present(pud) || pud_special(pud)) {
 			spin_unlock(ptl);
 			goto not_found;
 		} else if (!pud_leaf(pud)) {
@@ -884,6 +882,12 @@ struct folio *folio_walk_start(struct folio_walk *fw,
 		 * support PUD mappings in VM_PFNMAP|VM_MIXEDMAP VMAs.
 		 */
 		page = pud_page(pud);
+
+		if (is_device_dax_page(page)) {
+			spin_unlock(ptl);
+			goto not_found;
+		}
+
 		goto found;
 	}
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 5a882f2..567e2d0 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -139,8 +139,7 @@ pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address,
 {
 	pmd_t pmd;
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
-	VM_BUG_ON(pmd_present(*pmdp) && !pmd_trans_huge(*pmdp) &&
-		  !pmd_devmap(*pmdp));
+	VM_BUG_ON(pmd_present(*pmdp) && !pmd_trans_huge(*pmdp));
 	pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
 	flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
 	return pmd;
@@ -153,7 +152,7 @@ pud_t pudp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address,
 	pud_t pud;
 
 	VM_BUG_ON(address & ~HPAGE_PUD_MASK);
-	VM_BUG_ON(!pud_trans_huge(*pudp) && !pud_devmap(*pudp));
+	VM_BUG_ON(!pud_trans_huge(*pudp));
 	pud = pudp_huge_get_and_clear(vma->vm_mm, address, pudp);
 	flush_pud_tlb_range(vma, address, address + HPAGE_PUD_SIZE);
 	return pud;
@@ -293,7 +292,7 @@ pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
 		*pmdvalp = pmdval;
 	if (unlikely(pmd_none(pmdval) || is_pmd_migration_entry(pmdval)))
 		goto nomap;
-	if (unlikely(pmd_trans_huge(pmdval) || pmd_devmap(pmdval)))
+	if (unlikely(pmd_trans_huge(pmdval)))
 		goto nomap;
 	if (unlikely(pmd_bad(pmdval))) {
 		pmd_clear_bad(pmd);
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 4527c38..a03c6f1 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -790,8 +790,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 		 * (This includes the case where the PMD used to be THP and
 		 * changed back to none after __pte_alloc().)
 		 */
-		if (unlikely(!pmd_present(dst_pmdval) || pmd_trans_huge(dst_pmdval) ||
-			     pmd_devmap(dst_pmdval))) {
+		if (unlikely(!pmd_present(dst_pmdval) || pmd_trans_huge(dst_pmdval))) {
 			err = -EEXIST;
 			break;
 		}
@@ -1694,7 +1693,7 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
 
 	ptl = pmd_trans_huge_lock(src_pmd, src_vma);
 	if (ptl) {
-		if (pmd_devmap(*src_pmd)) {
+		if (vma_is_dax(src_vma)) {
 			spin_unlock(ptl);
 			err = -ENOENT;
 			break;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 39886f4..b0e25d1 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3366,7 +3366,7 @@ static unsigned long get_pte_pfn(pte_t pte, struct vm_area_struct *vma, unsigned
 	if (!pte_present(pte) || is_zero_pfn(pfn))
 		return -1;
 
-	if (WARN_ON_ONCE(pte_devmap(pte) || pte_special(pte)))
+	if (WARN_ON_ONCE(pte_special(pte)))
 		return -1;
 
 	if (!pte_young(pte) && !mm_has_notifiers(vma->vm_mm))
@@ -3391,9 +3391,6 @@ static unsigned long get_pmd_pfn(pmd_t pmd, struct vm_area_struct *vma, unsigned
 	if (!pmd_present(pmd) || is_huge_zero_pmd(pmd))
 		return -1;
 
-	if (WARN_ON_ONCE(pmd_devmap(pmd)))
-		return -1;
-
 	if (!pmd_young(pmd) && !mm_has_notifiers(vma->vm_mm))
 		return -1;