From patchwork Thu Jun 27 00:54:16 2024
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 13713619
X-Patchwork-Delegate: iweiny@gmail.com
From: Alistair Popple
To: dan.j.williams@intel.com, vishal.l.verma@intel.com, dave.jiang@intel.com, logang@deltatee.com, bhelgaas@google.com, jack@suse.cz, jgg@ziepe.ca
Cc: catalin.marinas@arm.com, will@kernel.org, mpe@ellerman.id.au, npiggin@gmail.com, dave.hansen@linux.intel.com, ira.weiny@intel.com, willy@infradead.org, djwong@kernel.org, tytso@mit.edu, linmiaohe@huawei.com, david@redhat.com, peterx@redhat.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, nvdimm@lists.linux.dev, linux-cxl@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org, jhubbard@nvidia.com, hch@lst.de, david@fromorbit.com, Alistair Popple, Jason Gunthorpe
Subject: [PATCH 01/13] mm/gup.c: Remove redundant check for PCI P2PDMA page
Date: Thu, 27 Jun 2024 10:54:16 +1000
X-Mailer: git-send-email 2.43.0
X-Mailing-List: nvdimm@lists.linux.dev
PCI P2PDMA pages are not mapped with pXX_devmap PTEs, therefore the
check in __gup_device_huge() is redundant. Remove it.

Signed-off-by: Alistair Popple
Reviewed-by: Jason Gunthorpe
Acked-by: David Hildenbrand
Acked-by: Dan Williams
---
 mm/gup.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index ca0f5ce..669583e 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -3044,11 +3044,6 @@ static int gup_fast_devmap_leaf(unsigned long pfn, unsigned long addr,
 			break;
 		}
 
-		if (!(flags & FOLL_PCI_P2PDMA) && is_pci_p2pdma_page(page)) {
-			gup_fast_undo_dev_pagemap(nr, nr_start, flags, pages);
-			break;
-		}
-
 		folio = try_grab_folio(page, 1, flags);
 		if (!folio) {
 			gup_fast_undo_dev_pagemap(nr, nr_start, flags, pages);

From patchwork Thu Jun 27 00:54:17 2024
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 13713620
X-Patchwork-Delegate: iweiny@gmail.com
From: Alistair Popple
To: dan.j.williams@intel.com, vishal.l.verma@intel.com, dave.jiang@intel.com, logang@deltatee.com, bhelgaas@google.com, jack@suse.cz, jgg@ziepe.ca
Cc: catalin.marinas@arm.com, will@kernel.org, mpe@ellerman.id.au, npiggin@gmail.com, dave.hansen@linux.intel.com, ira.weiny@intel.com, willy@infradead.org, djwong@kernel.org, tytso@mit.edu, linmiaohe@huawei.com, david@redhat.com, peterx@redhat.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, nvdimm@lists.linux.dev, linux-cxl@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org, jhubbard@nvidia.com, hch@lst.de, david@fromorbit.com, Alistair Popple
Subject: [PATCH 02/13] pci/p2pdma: Don't initialise page refcount to one
Date: Thu, 27 Jun 2024 10:54:17 +1000
The reference counts for ZONE_DEVICE private pages should be
initialised by the driver when the page is actually allocated by the
driver allocator, not when the pages are first created. This is
currently the case for MEMORY_DEVICE_PRIVATE and MEMORY_DEVICE_COHERENT
pages but not MEMORY_DEVICE_PCI_P2PDMA pages, so fix that up.

Signed-off-by: Alistair Popple
---
 drivers/pci/p2pdma.c | 2 ++
 mm/memremap.c        | 8 ++++----
 mm/mm_init.c         | 4 +++-
 3 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index 4f47a13..1e9ea32 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -128,6 +128,8 @@ static int p2pmem_alloc_mmap(struct file *filp, struct kobject *kobj,
 		goto out;
 	}
 
+	set_page_count(virt_to_page(kaddr), 1);
+
 	/*
 	 * vm_insert_page() can sleep, so a reference is taken to mapping
 	 * such that rcu_read_unlock() can be done before inserting the

diff --git a/mm/memremap.c b/mm/memremap.c
index 40d4547..caccbd8 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -488,15 +488,15 @@ void free_zone_device_folio(struct folio *folio)
 	folio->mapping = NULL;
 	folio->page.pgmap->ops->page_free(folio_page(folio, 0));
 
-	if (folio->page.pgmap->type != MEMORY_DEVICE_PRIVATE &&
-	    folio->page.pgmap->type != MEMORY_DEVICE_COHERENT)
+	if (folio->page.pgmap->type == MEMORY_DEVICE_PRIVATE ||
+	    folio->page.pgmap->type == MEMORY_DEVICE_COHERENT)
+		put_dev_pagemap(folio->page.pgmap);
+	else if (folio->page.pgmap->type != MEMORY_DEVICE_PCI_P2PDMA)
 		/*
 		 * Reset the refcount to 1 to prepare for handing out the page
 		 * again.
 		 */
 		folio_set_count(folio, 1);
-	else
-		put_dev_pagemap(folio->page.pgmap);
 }
 
 void zone_device_page_init(struct page *page)

diff --git a/mm/mm_init.c b/mm/mm_init.c
index 3ec0493..b7e1599 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -6,6 +6,7 @@
  * Author Mel Gorman
  *
  */
+#include "linux/memremap.h"
 #include 
 #include 
 #include 
@@ -1014,7 +1015,8 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
 	 * which will set the page count to 1 when allocating the page.
 	 */
 	if (pgmap->type == MEMORY_DEVICE_PRIVATE ||
-	    pgmap->type == MEMORY_DEVICE_COHERENT)
+	    pgmap->type == MEMORY_DEVICE_COHERENT ||
+	    pgmap->type == MEMORY_DEVICE_PCI_P2PDMA)
 		set_page_count(page, 0);
 }

From patchwork Thu Jun 27 00:54:18 2024
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 13713621
X-Patchwork-Delegate: iweiny@gmail.com
From: Alistair Popple
To: dan.j.williams@intel.com, vishal.l.verma@intel.com, dave.jiang@intel.com, logang@deltatee.com, bhelgaas@google.com, jack@suse.cz, jgg@ziepe.ca
Cc: catalin.marinas@arm.com, will@kernel.org, mpe@ellerman.id.au, npiggin@gmail.com, dave.hansen@linux.intel.com, ira.weiny@intel.com, willy@infradead.org, djwong@kernel.org, tytso@mit.edu, linmiaohe@huawei.com, david@redhat.com, peterx@redhat.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, nvdimm@lists.linux.dev, linux-cxl@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org, jhubbard@nvidia.com, hch@lst.de, david@fromorbit.com, Alistair Popple
Subject: [PATCH 03/13] fs/dax: Refactor wait for dax idle page
Date: Thu, 27 Jun 2024 10:54:18 +1000
A FS DAX page is considered idle when its refcount drops to one. This
is currently open-coded in all file systems supporting FS DAX. Move
the idle detection to a common function to make future changes easier.

Signed-off-by: Alistair Popple
Reviewed-by: Jan Kara
Reviewed-by: Christoph Hellwig
---
 fs/ext4/inode.c     | 5 +----
 fs/fuse/dax.c       | 4 +---
 fs/xfs/xfs_inode.c  | 4 +---
 include/linux/dax.h | 8 ++++++++
 4 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 4bae9cc..4737450 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -3844,10 +3844,7 @@ int ext4_break_layouts(struct inode *inode)
 		if (!page)
 			return 0;
 
-		error = ___wait_var_event(&page->_refcount,
-				atomic_read(&page->_refcount) == 1,
-				TASK_INTERRUPTIBLE, 0, 0,
-				ext4_wait_dax_page(inode));
+		error = dax_wait_page_idle(page, ext4_wait_dax_page, inode);
 	} while (error == 0);
 
 	return error;

diff --git a/fs/fuse/dax.c b/fs/fuse/dax.c
index 12ef91d..da50595 100644
--- a/fs/fuse/dax.c
+++ b/fs/fuse/dax.c
@@ -676,9 +676,7 @@ static int __fuse_dax_break_layouts(struct inode *inode, bool *retry,
 		return 0;
 
 	*retry = true;
-	return ___wait_var_event(&page->_refcount,
-			atomic_read(&page->_refcount) == 1, TASK_INTERRUPTIBLE,
-			0, 0, fuse_wait_dax_page(inode));
+	return dax_wait_page_idle(page, fuse_wait_dax_page, inode);
 }
 
 /* dmap_end == 0 leads to unmapping of whole file */

diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
index f36091e..b5742aa 100644
--- a/fs/xfs/xfs_inode.c
+++ b/fs/xfs/xfs_inode.c
@@ -4243,9 +4243,7 @@ xfs_break_dax_layouts(
 		return 0;
 
 	*retry = true;
-	return ___wait_var_event(&page->_refcount,
-			atomic_read(&page->_refcount) == 1, TASK_INTERRUPTIBLE,
-			0, 0, xfs_wait_dax_page(inode));
+	return dax_wait_page_idle(page, xfs_wait_dax_page, inode);
 }
 
 int

diff --git a/include/linux/dax.h b/include/linux/dax.h
index 9d3e332..773dfc4 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -213,6 +213,14 @@ int dax_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
 int dax_truncate_page(struct inode *inode, loff_t pos, bool *did_zero,
 		const struct iomap_ops *ops);
 
+static inline int dax_wait_page_idle(struct page *page,
+				void (cb)(struct inode *),
+				struct inode *inode)
+{
+	return ___wait_var_event(page, page_ref_count(page) == 1,
+				TASK_INTERRUPTIBLE, 0, 0, cb(inode));
+}
+
 #if IS_ENABLED(CONFIG_DAX)
 int dax_read_lock(void);
 void dax_read_unlock(int id);

From patchwork Thu Jun 27 00:54:19 2024
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 13713622
From: Alistair Popple
To: dan.j.williams@intel.com, vishal.l.verma@intel.com, dave.jiang@intel.com, logang@deltatee.com, bhelgaas@google.com, jack@suse.cz, jgg@ziepe.ca
Cc: catalin.marinas@arm.com, will@kernel.org, mpe@ellerman.id.au, npiggin@gmail.com, dave.hansen@linux.intel.com,
ira.weiny@intel.com, willy@infradead.org, djwong@kernel.org, tytso@mit.edu, linmiaohe@huawei.com, david@redhat.com, peterx@redhat.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, nvdimm@lists.linux.dev, linux-cxl@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org, jhubbard@nvidia.com, hch@lst.de, david@fromorbit.com, Alistair Popple
Subject: [PATCH 04/13] fs/dax: Add dax_page_free callback
Date: Thu, 27 Jun 2024 10:54:19 +1000
X-Mailing-List: nvdimm@lists.linux.dev
When a FS DAX page is freed it has to notify filesystems that the
page has been unpinned/unmapped and is free. Currently this involves
special code in the page free paths to detect a transition of refcount
from 2 to 1 and to call some FS DAX specific code.

A future change will require this to happen when the page refcount
drops to zero. In this case we can use the existing
pgmap->ops->page_free() callback, so wire that up for all devices that
support FS DAX (nvdimm and virtio).

Signed-off-by: Alistair Popple
---
 drivers/nvdimm/pmem.c | 1 +
 fs/dax.c              | 6 ++++++
 fs/fuse/virtio_fs.c   | 5 +++++
 include/linux/dax.h   | 1 +
 4 files changed, 13 insertions(+)

diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 598fe2e..cafadd0 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -444,6 +444,7 @@ static int pmem_pagemap_memory_failure(struct dev_pagemap *pgmap,
 
 static const struct dev_pagemap_ops fsdax_pagemap_ops = {
 	.memory_failure	= pmem_pagemap_memory_failure,
+	.page_free	= dax_page_free,
 };
 
 static int pmem_attach_disk(struct device *dev,

diff --git a/fs/dax.c b/fs/dax.c
index becb4a6..f93afd7 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -2065,3 +2065,9 @@ int dax_remap_file_range_prep(struct file *file_in, loff_t pos_in,
 				pos_out, len, remap_flags, ops);
 }
 EXPORT_SYMBOL_GPL(dax_remap_file_range_prep);
+
+void dax_page_free(struct page *page)
+{
+	wake_up_var(page);
+}
+EXPORT_SYMBOL_GPL(dax_page_free);

diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
index 1a52a51..6e90a4b 100644
--- a/fs/fuse/virtio_fs.c
+++ b/fs/fuse/virtio_fs.c
@@ -909,6 +909,10 @@ static void virtio_fs_cleanup_dax(void *data)
 DEFINE_FREE(cleanup_dax, struct dax_dev *, if (!IS_ERR_OR_NULL(_T))
			virtio_fs_cleanup_dax(_T))
 
+static const struct dev_pagemap_ops fsdax_pagemap_ops = {
+	.page_free = dax_page_free,
+};
+
 static int virtio_fs_setup_dax(struct virtio_device *vdev, struct virtio_fs *fs)
 {
 	struct dax_device *dax_dev __free(cleanup_dax) = NULL;
@@ -948,6 +952,7 @@ static int virtio_fs_setup_dax(struct virtio_device *vdev, struct virtio_fs *fs)
 		return -ENOMEM;
 
 	pgmap->type = MEMORY_DEVICE_FS_DAX;
+	pgmap->ops = &fsdax_pagemap_ops;
 
 	/* Ideally we would directly use the PCI BAR resource but
 	 * devm_memremap_pages() wants its own copy in pgmap. So

diff --git a/include/linux/dax.h b/include/linux/dax.h
index 773dfc4..adbafc8 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -213,6 +213,7 @@ int dax_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
 int dax_truncate_page(struct inode *inode, loff_t pos, bool *did_zero,
 		const struct iomap_ops *ops);
 
+void dax_page_free(struct page *page);
 static inline int dax_wait_page_idle(struct page *page,
 		void (cb)(struct inode *),
 		struct inode *inode)

From patchwork Thu Jun 27 00:54:20 2024
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 13713623
From: Alistair Popple
To: dan.j.williams@intel.com, vishal.l.verma@intel.com, dave.jiang@intel.com, logang@deltatee.com, bhelgaas@google.com, jack@suse.cz, jgg@ziepe.ca
Cc: catalin.marinas@arm.com, will@kernel.org, mpe@ellerman.id.au, npiggin@gmail.com, dave.hansen@linux.intel.com,
ira.weiny@intel.com, willy@infradead.org, djwong@kernel.org, tytso@mit.edu, linmiaohe@huawei.com, david@redhat.com, peterx@redhat.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, nvdimm@lists.linux.dev, linux-cxl@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org, jhubbard@nvidia.com, hch@lst.de, david@fromorbit.com, Alistair Popple, Jason Gunthorpe
Subject: [PATCH 05/13] mm: Allow compound zone device pages
Date: Thu, 27 Jun 2024 10:54:20 +1000
X-Mailing-List: nvdimm@lists.linux.dev
Zone device pages are used to represent various types of device memory
managed by device drivers. Currently compound zone device pages are
not supported. This is because MEMORY_DEVICE_FS_DAX pages are the only
user of higher order zone device pages, and they have their own page
reference counting.

A future change will unify FS DAX reference counting with normal page
reference counting rules and remove the special FS DAX reference
counting. Supporting that requires compound zone device pages.

Supporting compound zone device pages requires compound_head() to
distinguish between head and tail pages whilst still preserving the
special struct page fields that are specific to zone device pages.

A tail page is distinguished by having bit zero set in
page->compound_head, with the remaining bits pointing to the head
page. For zone device pages page->compound_head is shared with
page->pgmap.

The page->pgmap field is common to all pages within a memory
section. Therefore pgmap is the same for both head and tail pages, and
we can use the same scheme to distinguish tail pages. To obtain the
pgmap for a tail page a new accessor is introduced to fetch it from
compound_head.

Signed-off-by: Alistair Popple
Reviewed-by: Jason Gunthorpe

---

In response to the RFC Matthew Wilcox pointed out that we could move
the pgmap field to the folio. Morally I think that's where pgmap
belongs, so it's a good idea that I just haven't had a chance to
implement yet. I suspect there will be at least a v2 of this series
though, so will probably do it then.
---
 drivers/gpu/drm/nouveau/nouveau_dmem.c |  2 +-
 drivers/pci/p2pdma.c                   |  2 +-
 include/linux/memremap.h               | 12 +++++++++---
 include/linux/migrate.h                |  2 +-
 lib/test_hmm.c                         |  2 +-
 mm/hmm.c                               |  2 +-
 mm/memory.c                            |  2 +-
 mm/memremap.c                          |  8 ++++----
 mm/migrate_device.c                    |  4 ++--
 9 files changed, 21 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index 6fb65b0..18d74a7 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -88,7 +88,7 @@ struct nouveau_dmem {
 
 static struct nouveau_dmem_chunk *nouveau_page_to_chunk(struct page *page)
 {
-	return container_of(page->pgmap, struct nouveau_dmem_chunk, pagemap);
+	return container_of(page_dev_pagemap(page), struct nouveau_dmem_chunk, pagemap);
 }
 
 static struct nouveau_drm *page_to_drm(struct page *page)

diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index 1e9ea32..d9b422a 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -195,7 +195,7 @@ static const struct attribute_group p2pmem_group = {
 
 static void p2pdma_page_free(struct page *page)
 {
-	struct pci_p2pdma_pagemap *pgmap = to_p2p_pgmap(page->pgmap);
+	struct pci_p2pdma_pagemap *pgmap = to_p2p_pgmap(page_dev_pagemap(page));
 	/* safe to dereference while a reference is held to the percpu ref */
 	struct pci_p2pdma *p2pdma =
 		rcu_dereference_protected(pgmap->provider->p2pdma, 1);

diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 3f7143a..6505713 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -140,6 +140,12 @@ struct dev_pagemap {
 	};
 };
 
+static inline struct dev_pagemap *page_dev_pagemap(const struct page *page)
+{
+	WARN_ON(!is_zone_device_page(page));
+	return compound_head(page)->pgmap;
+}
+
 static inline bool pgmap_has_memory_failure(struct dev_pagemap *pgmap)
 {
 	return pgmap->ops && pgmap->ops->memory_failure;
@@ -161,7 +167,7 @@ static inline bool is_device_private_page(const struct page *page)
 {
 	return IS_ENABLED(CONFIG_DEVICE_PRIVATE) &&
 		is_zone_device_page(page) &&
-		page->pgmap->type == MEMORY_DEVICE_PRIVATE;
+		page_dev_pagemap(page)->type == MEMORY_DEVICE_PRIVATE;
 }
 
 static inline bool folio_is_device_private(const struct folio *folio)
@@ -173,13 +179,13 @@ static inline bool is_pci_p2pdma_page(const struct page *page)
 {
 	return IS_ENABLED(CONFIG_PCI_P2PDMA) &&
 		is_zone_device_page(page) &&
-		page->pgmap->type == MEMORY_DEVICE_PCI_P2PDMA;
+		page_dev_pagemap(page)->type == MEMORY_DEVICE_PCI_P2PDMA;
 }
 
 static inline bool is_device_coherent_page(const struct page *page)
 {
 	return is_zone_device_page(page) &&
-		page->pgmap->type == MEMORY_DEVICE_COHERENT;
+		page_dev_pagemap(page)->type == MEMORY_DEVICE_COHERENT;
 }
 
 static inline bool folio_is_device_coherent(const struct folio *folio)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 2ce13e8..e31acc0 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -200,7 +200,7 @@ struct migrate_vma {
 	unsigned long		end;
 
 	/*
-	 * Set to the owner value also stored in page->pgmap->owner for
+	 * Set to the owner value also stored in page_dev_pagemap(page)->owner for
 	 * migrating out of device private memory. The flags also need to
 	 * be set to MIGRATE_VMA_SELECT_DEVICE_PRIVATE.
	 * The caller should always set this field when using mmu notifier

diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index b823ba7..a02d709 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -195,7 +195,7 @@ static int dmirror_fops_release(struct inode *inode, struct file *filp)
 
 static struct dmirror_chunk *dmirror_page_to_chunk(struct page *page)
 {
-	return container_of(page->pgmap, struct dmirror_chunk, pagemap);
+	return container_of(page_dev_pagemap(page), struct dmirror_chunk, pagemap);
 }
 
 static struct dmirror_device *dmirror_page_to_device(struct page *page)

diff --git a/mm/hmm.c b/mm/hmm.c
index 93aebd9..26e1905 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -248,7 +248,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 	 * just report the PFN.
 	 */
 	if (is_device_private_entry(entry) &&
-	    pfn_swap_entry_to_page(entry)->pgmap->owner ==
+	    page_dev_pagemap(pfn_swap_entry_to_page(entry))->owner ==
 	    range->dev_private_owner) {
 		cpu_flags = HMM_PFN_VALID;
 		if (is_writable_device_private_entry(entry))

diff --git a/mm/memory.c b/mm/memory.c
index 25a77c4..ce48a05 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3994,7 +3994,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			 */
 			get_page(vmf->page);
 			pte_unmap_unlock(vmf->pte, vmf->ptl);
-			ret = vmf->page->pgmap->ops->migrate_to_ram(vmf);
+			ret = page_dev_pagemap(vmf->page)->ops->migrate_to_ram(vmf);
 			put_page(vmf->page);
 		} else if (is_hwpoison_entry(entry)) {
 			ret = VM_FAULT_HWPOISON;

diff --git a/mm/memremap.c b/mm/memremap.c
index caccbd8..13c1d5b 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -458,8 +458,8 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);
 
 void free_zone_device_folio(struct folio *folio)
 {
-	if (WARN_ON_ONCE(!folio->page.pgmap->ops ||
-			!folio->page.pgmap->ops->page_free))
+	if (WARN_ON_ONCE(!page_dev_pagemap(&folio->page)->ops ||
+			!page_dev_pagemap(&folio->page)->ops->page_free))
 		return;
 
 	mem_cgroup_uncharge(folio);
@@ -486,7 +486,7 @@ void free_zone_device_folio(struct folio *folio)
 	 * to clear
folio->mapping. */ folio->mapping = NULL; - folio->page.pgmap->ops->page_free(folio_page(folio, 0)); + page_dev_pagemap(&folio->page)->ops->page_free(folio_page(folio, 0)); if (folio->page.pgmap->type == MEMORY_DEVICE_PRIVATE || folio->page.pgmap->type == MEMORY_DEVICE_COHERENT) @@ -505,7 +505,7 @@ void zone_device_page_init(struct page *page) * Drivers shouldn't be allocating pages after calling * memunmap_pages(). */ - WARN_ON_ONCE(!percpu_ref_tryget_live(&page->pgmap->ref)); + WARN_ON_ONCE(!percpu_ref_tryget_live(&page_dev_pagemap(page)->ref)); set_page_count(page, 1); lock_page(page); } diff --git a/mm/migrate_device.c b/mm/migrate_device.c index aecc719..4fdd8fa 100644 --- a/mm/migrate_device.c +++ b/mm/migrate_device.c @@ -135,7 +135,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp, page = pfn_swap_entry_to_page(entry); if (!(migrate->flags & MIGRATE_VMA_SELECT_DEVICE_PRIVATE) || - page->pgmap->owner != migrate->pgmap_owner) + page_dev_pagemap(page)->owner != migrate->pgmap_owner) goto next; mpfn = migrate_pfn(page_to_pfn(page)) | @@ -156,7 +156,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp, goto next; else if (page && is_device_coherent_page(page) && (!(migrate->flags & MIGRATE_VMA_SELECT_DEVICE_COHERENT) || - page->pgmap->owner != migrate->pgmap_owner)) + page_dev_pagemap(page)->owner != migrate->pgmap_owner)) goto next; mpfn = migrate_pfn(pfn) | MIGRATE_PFN_MIGRATE; mpfn |= pte_write(pte) ? 
MIGRATE_PFN_WRITE : 0; From patchwork Thu Jun 27 00:54:21 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alistair Popple X-Patchwork-Id: 13713624 X-Patchwork-Delegate: iweiny@gmail.com Received: from NAM02-DM3-obe.outbound.protection.outlook.com (mail-dm3nam02on2066.outbound.protection.outlook.com [40.107.95.66]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 67F428F47 for ; Thu, 27 Jun 2024 00:55:09 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=fail smtp.client-ip=40.107.95.66 ARC-Seal: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1719449711; cv=fail; b=hT0KqNrJ05/r4HxXr4KNZ/v8xEOyzVuf+aUq/iS+a2h2OKvwjxLupRp4j3yJV5gtzyDhZbyO4+VJgZeXhPXEn7WfOwMgHaUrvb30W/utO4C4WINX9BErbeF9V0AvuTh4uGPWaP5a0JLxaj/MtFBayIxRVO6gx4rEfJ6Ul5g6/V8= ARC-Message-Signature: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1719449711; c=relaxed/simple; bh=yiYk4ZVhdGpw6I2jrZ29ITRgyICdMhOWkgra/OxKjYU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: Content-Type:MIME-Version; b=Tx6XZca7ho8dfcvo1UHL3bbI/ySPEZagghKGdWvqi3WIsm5z6BYcHNMJox57N9R1ofPjcX4yFEwavlX3qN9R8171BCVmZsLDgPqm9F1zrj/qGi2FcbkWcpvOWu5HMvKkav01U1z4pd2lhTrkOpIKYPXKzONCCjTRMehZEPVJd7I= ARC-Authentication-Results: i=2; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=nvidia.com; spf=fail smtp.mailfrom=nvidia.com; dkim=pass (2048-bit key) header.d=Nvidia.com header.i=@Nvidia.com header.b=lUrLIacI; arc=fail smtp.client-ip=40.107.95.66 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=nvidia.com Authentication-Results: smtp.subspace.kernel.org; spf=fail smtp.mailfrom=nvidia.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=Nvidia.com header.i=@Nvidia.com header.b="lUrLIacI" ARC-Seal: i=1; 
From: Alistair Popple
To: dan.j.williams@intel.com, vishal.l.verma@intel.com, dave.jiang@intel.com, logang@deltatee.com, bhelgaas@google.com, jack@suse.cz, jgg@ziepe.ca
Cc: catalin.marinas@arm.com, will@kernel.org, mpe@ellerman.id.au, npiggin@gmail.com, dave.hansen@linux.intel.com, ira.weiny@intel.com, willy@infradead.org, djwong@kernel.org, tytso@mit.edu, linmiaohe@huawei.com, david@redhat.com, peterx@redhat.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, nvdimm@lists.linux.dev, linux-cxl@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org, jhubbard@nvidia.com, hch@lst.de, david@fromorbit.com, Alistair Popple
Subject: [PATCH 06/13] mm/memory: Add dax_insert_pfn
Date: Thu, 27 Jun 2024 10:54:21 +1000
Message-ID: <50013c1ee52b5bb1213571bff66780568455f54c.1719386613.git-series.apopple@nvidia.com>
X-Mailer: git-send-email 2.43.0
X-Mailing-List: nvdimm@lists.linux.dev
Currently to map a DAX page the DAX driver calls vmf_insert_pfn. This
creates a special devmap PTE entry for the pfn but does not take a
reference on the underlying struct page for the mapping. This is
because DAX page refcounts are treated specially, as indicated by the
presence of a devmap entry.

To allow DAX page refcounts to be managed the same as normal page
refcounts introduce dax_insert_pfn. This will take a reference on the
underlying page much the same as vmf_insert_page, except it also
permits upgrading an existing mapping to be writable if
requested/possible.

Signed-off-by: Alistair Popple
---
 include/linux/mm.h |  4 ++-
 mm/memory.c        | 79 ++++++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 76 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9a5652c..b84368b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1080,6 +1080,8 @@ int vma_is_stack_for_current(struct vm_area_struct *vma);
 struct mmu_gather;
 struct inode;
 
+extern void prep_compound_page(struct page *page, unsigned int order);
+
 /*
  * compound_order() can be called without holding a reference, which means
  * that niceties like page_folio() don't work. These callers should be
@@ -3624,6 +3626,8 @@ int vm_map_pages(struct vm_area_struct *vma, struct page **pages,
 				unsigned long num);
 int vm_map_pages_zero(struct vm_area_struct *vma, struct page **pages,
 				unsigned long num);
+vm_fault_t dax_insert_pfn(struct vm_area_struct *vma,
+		unsigned long addr, pfn_t pfn, bool write);
 vm_fault_t vmf_insert_pfn(struct vm_area_struct *vma, unsigned long addr,
 			unsigned long pfn);
 vm_fault_t vmf_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
diff --git a/mm/memory.c b/mm/memory.c
index ce48a05..4f26a1f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1989,14 +1989,42 @@ static int validate_page_before_insert(struct page *page)
 }
 
 static int insert_page_into_pte_locked(struct vm_area_struct *vma, pte_t *pte,
-			unsigned long addr, struct page *page, pgprot_t prot)
+			unsigned long addr, struct page *page, pgprot_t prot, bool mkwrite)
 {
 	struct folio *folio = page_folio(page);
+	pte_t entry = ptep_get(pte);
 
-	if (!pte_none(ptep_get(pte)))
+	if (!pte_none(entry)) {
+		if (mkwrite) {
+			/*
+			 * For read faults on private mappings the PFN passed
+			 * in may not match the PFN we have mapped if the
+			 * mapped PFN is a writeable COW page. In the mkwrite
+			 * case we are creating a writable PTE for a shared
+			 * mapping and we expect the PFNs to match. If they
+			 * don't match, we are likely racing with block
+			 * allocation and mapping invalidation so just skip the
+			 * update.
+			 */
+			if (pte_pfn(entry) != page_to_pfn(page)) {
+				WARN_ON_ONCE(!is_zero_pfn(pte_pfn(entry)));
+				return -EFAULT;
+			}
+			entry = maybe_mkwrite(entry, vma);
+			entry = pte_mkyoung(entry);
+			if (ptep_set_access_flags(vma, addr, pte, entry, 1))
+				update_mmu_cache(vma, addr, pte);
+			return 0;
+		}
 		return -EBUSY;
+	}
+
 	/* Ok, finally just insert the thing.. */
 	folio_get(folio);
+	if (mkwrite)
+		entry = maybe_mkwrite(mk_pte(page, prot), vma);
+	else
+		entry = mk_pte(page, prot);
 	inc_mm_counter(vma->vm_mm, mm_counter_file(folio));
 	folio_add_file_rmap_pte(folio, page, vma);
 	set_pte_at(vma->vm_mm, addr, pte, mk_pte(page, prot));
@@ -2011,7 +2039,7 @@ static int insert_page_into_pte_locked(struct vm_area_struct *vma, pte_t *pte,
  * pages reserved for the old functions anyway.
  */
 static int insert_page(struct vm_area_struct *vma, unsigned long addr,
-			struct page *page, pgprot_t prot)
+			struct page *page, pgprot_t prot, bool mkwrite)
 {
 	int retval;
 	pte_t *pte;
@@ -2024,7 +2052,7 @@ static int insert_page(struct vm_area_struct *vma, unsigned long addr,
 	pte = get_locked_pte(vma->vm_mm, addr, &ptl);
 	if (!pte)
 		goto out;
-	retval = insert_page_into_pte_locked(vma, pte, addr, page, prot);
+	retval = insert_page_into_pte_locked(vma, pte, addr, page, prot, mkwrite);
 	pte_unmap_unlock(pte, ptl);
 out:
 	return retval;
@@ -2040,7 +2068,7 @@ static int insert_page_in_batch_locked(struct vm_area_struct *vma, pte_t *pte,
 	err = validate_page_before_insert(page);
 	if (err)
 		return err;
-	return insert_page_into_pte_locked(vma, pte, addr, page, prot);
+	return insert_page_into_pte_locked(vma, pte, addr, page, prot, false);
 }
 
 /* insert_pages() amortizes the cost of spinlock operations
@@ -2177,7 +2205,7 @@ int vm_insert_page(struct vm_area_struct *vma, unsigned long addr,
 		BUG_ON(vma->vm_flags & VM_PFNMAP);
 		vm_flags_set(vma, VM_MIXEDMAP);
 	}
-	return insert_page(vma, addr, page, vma->vm_page_prot);
+	return insert_page(vma, addr, page, vma->vm_page_prot, false);
 }
 EXPORT_SYMBOL(vm_insert_page);
 
@@ -2451,7 +2479,7 @@ static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma,
 		 * result in pfn_t_has_page() == false.
 		 */
 		page = pfn_to_page(pfn_t_to_pfn(pfn));
-		err = insert_page(vma, addr, page, pgprot);
+		err = insert_page(vma, addr, page, pgprot, mkwrite);
 	} else {
 		return insert_pfn(vma, addr, pfn, pgprot, mkwrite);
 	}
@@ -2464,6 +2492,43 @@ static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma,
 	return VM_FAULT_NOPAGE;
 }
 
+vm_fault_t dax_insert_pfn(struct vm_area_struct *vma,
+		unsigned long addr, pfn_t pfn_t, bool write)
+{
+	pgprot_t pgprot = vma->vm_page_prot;
+	unsigned long pfn = pfn_t_to_pfn(pfn_t);
+	struct page *page = pfn_to_page(pfn);
+	int err;
+
+	if (addr < vma->vm_start || addr >= vma->vm_end)
+		return VM_FAULT_SIGBUS;
+
+	track_pfn_insert(vma, &pgprot, pfn_t);
+
+	if (!pfn_modify_allowed(pfn, pgprot))
+		return VM_FAULT_SIGBUS;
+
+	/*
+	 * We refcount the page normally so make sure pfn_valid is true.
+	 */
+	if (!pfn_t_valid(pfn_t))
+		return VM_FAULT_SIGBUS;
+
+	WARN_ON_ONCE(pfn_t_devmap(pfn_t));
+
+	if (WARN_ON(is_zero_pfn(pfn) && write))
+		return VM_FAULT_SIGBUS;
+
+	err = insert_page(vma, addr, page, pgprot, write);
+	if (err == -ENOMEM)
+		return VM_FAULT_OOM;
+	if (err < 0 && err != -EBUSY)
+		return VM_FAULT_SIGBUS;
+
+	return VM_FAULT_NOPAGE;
+}
+EXPORT_SYMBOL_GPL(dax_insert_pfn);
+
 vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
 		pfn_t pfn)
 {

From patchwork Thu Jun 27 00:54:22 2024
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 13713625
X-Patchwork-Delegate: iweiny@gmail.com
From: Alistair Popple
Subject: [PATCH 07/13] huge_memory: Allow mappings of PUD sized pages
Date: Thu, 27 Jun 2024 10:54:22 +1000
Currently DAX folio/page reference counts are managed differently to
normal pages. To allow these to be managed the same as normal pages
introduce dax_insert_pfn_pud. This will map the entire PUD-sized folio
and take references as it would for a normally mapped page.

This is distinct from the current mechanism, vmf_insert_pfn_pud, which
simply inserts a special devmap PUD entry into the page table without
holding a reference to the page for the mapping.

Signed-off-by: Alistair Popple
---
 include/linux/huge_mm.h |   4 ++-
 include/linux/rmap.h    |  14 +++++-
 mm/huge_memory.c        | 108 ++++++++++++++++++++++++++++++++++++++---
 mm/rmap.c               |  48 ++++++++++++++++++-
 4 files changed, 168 insertions(+), 6 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 2aa986a..b98a3cc 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -39,6 +39,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 
 vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write);
 vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write);
+vm_fault_t dax_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write);
 
 enum transparent_hugepage_flag {
 	TRANSPARENT_HUGEPAGE_UNSUPPORTED,
@@ -106,6 +107,9 @@ extern struct kobj_attribute shmem_enabled_attr;
 #define HPAGE_PUD_MASK	(~(HPAGE_PUD_SIZE - 1))
 #define HPAGE_PUD_SIZE	((1UL) << HPAGE_PUD_SHIFT)
 
+#define HPAGE_PUD_ORDER (HPAGE_PUD_SHIFT-PAGE_SHIFT)
+#define HPAGE_PUD_NR (1<<HPAGE_PUD_ORDER)
+
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
 		atomic_inc(&folio->_large_mapcount);
 		break;
 	case RMAP_LEVEL_PMD:
+	case RMAP_LEVEL_PUD:
 		atomic_inc(&folio->_entire_mapcount);
 		atomic_inc(&folio->_large_mapcount);
 		break;
@@ -434,6 +447,7 @@ static __always_inline int __folio_try_dup_anon_rmap(struct folio *folio,
 		atomic_add(orig_nr_pages, &folio->_large_mapcount);
 		break;
 	case RMAP_LEVEL_PMD:
+	case RMAP_LEVEL_PUD:
 		if (PageAnonExclusive(page)) {
 			if (unlikely(maybe_pinned))
 				return -EBUSY;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index db7946a..e1f053e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1283,6 +1283,70 @@ vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write)
 	return VM_FAULT_NOPAGE;
 }
 EXPORT_SYMBOL_GPL(vmf_insert_pfn_pud);
+
+/**
+ * dax_insert_pfn_pud - insert a pud size pfn backed by a normal page
+ * @vmf: Structure describing the fault
+ * @pfn: pfn of the page to insert
+ * @write: whether it's a write fault
+ *
+ * Return: vm_fault_t value.
+ */
+vm_fault_t dax_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	unsigned long addr = vmf->address & PUD_MASK;
+	pud_t *pud = vmf->pud;
+	pgprot_t prot = vma->vm_page_prot;
+	struct mm_struct *mm = vma->vm_mm;
+	pud_t entry;
+	spinlock_t *ptl;
+	struct folio *folio;
+	struct page *page;
+
+	if (addr < vma->vm_start || addr >= vma->vm_end)
+		return VM_FAULT_SIGBUS;
+
+	track_pfn_insert(vma, &prot, pfn);
+
+	ptl = pud_lock(mm, pud);
+	if (!pud_none(*pud)) {
+		if (write) {
+			if (pud_pfn(*pud) != pfn_t_to_pfn(pfn)) {
+				WARN_ON_ONCE(!is_huge_zero_pud(*pud));
+				goto out_unlock;
+			}
+			entry = pud_mkyoung(*pud);
+			entry = maybe_pud_mkwrite(pud_mkdirty(entry), vma);
+			if (pudp_set_access_flags(vma, addr, pud, entry, 1))
+				update_mmu_cache_pud(vma, addr, pud);
+		}
+		goto out_unlock;
+	}
+
+	entry = pud_mkhuge(pfn_t_pud(pfn, prot));
+	if (pfn_t_devmap(pfn))
+		entry = pud_mkdevmap(entry);
+	if (write) {
+		entry = pud_mkyoung(pud_mkdirty(entry));
+		entry = maybe_pud_mkwrite(entry, vma);
+	}
+
+	page = pfn_t_to_page(pfn);
+	folio = page_folio(page);
+	folio_get(folio);
+	folio_add_file_rmap_pud(folio, page, vma);
+	add_mm_counter(mm, mm_counter_file(folio), HPAGE_PUD_NR);
+
+	set_pud_at(mm, addr, pud, entry);
+	update_mmu_cache_pud(vma, addr, pud);
+
+out_unlock:
+	spin_unlock(ptl);
+
+	return VM_FAULT_NOPAGE;
+}
+EXPORT_SYMBOL_GPL(dax_insert_pfn_pud);
 #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
 
 void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
@@ -1836,7 +1900,8 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		zap_deposited_table(tlb->mm, pmd);
 		spin_unlock(ptl);
 	} else if (is_huge_zero_pmd(orig_pmd)) {
-		zap_deposited_table(tlb->mm, pmd);
+		if (!vma_is_dax(vma) || arch_needs_pgtable_deposit())
+			zap_deposited_table(tlb->mm, pmd);
 		spin_unlock(ptl);
 	} else {
 		struct folio *folio = NULL;
@@ -2268,20 +2333,34 @@ spinlock_t *__pud_trans_huge_lock(pud_t *pud, struct vm_area_struct *vma)
 int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		 pud_t *pud, unsigned long addr)
 {
+	pud_t orig_pud;
 	spinlock_t *ptl;
 
 	ptl = __pud_trans_huge_lock(pud, vma);
 	if (!ptl)
 		return 0;
 
-	pudp_huge_get_and_clear_full(vma, addr, pud, tlb->fullmm);
+	orig_pud = pudp_huge_get_and_clear_full(vma, addr, pud, tlb->fullmm);
 	tlb_remove_pud_tlb_entry(tlb, pud, addr);
-	if (vma_is_special_huge(vma)) {
+	if (!vma_is_dax(vma) && vma_is_special_huge(vma)) {
 		spin_unlock(ptl);
 		/* No zero page support yet */
 	} else {
-		/* No support for anonymous PUD pages yet */
-		BUG();
+		struct page *page = NULL;
+		struct folio *folio;
+
+		/* No support for anonymous PUD pages or migration yet */
+		BUG_ON(vma_is_anonymous(vma) || !pud_present(orig_pud));
+
+		page = pud_page(orig_pud);
+		folio = page_folio(page);
+		folio_remove_rmap_pud(folio, page, vma);
+		VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
+		VM_BUG_ON_PAGE(!PageHead(page), page);
+		add_mm_counter(tlb->mm, mm_counter_file(folio), -HPAGE_PUD_NR);
+
+		spin_unlock(ptl);
+		tlb_remove_page_size(tlb, page, HPAGE_PUD_SIZE);
 	}
 	return 1;
 }
@@ -2289,6 +2368,8 @@ int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
 static void __split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
 		unsigned long haddr)
 {
+	pud_t old_pud;
+
 	VM_BUG_ON(haddr & ~HPAGE_PUD_MASK);
 	VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
 	VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PUD_SIZE, vma);
@@ -2296,7 +2377,22 @@ static void __split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
 
 	count_vm_event(THP_SPLIT_PUD);
 
-	pudp_huge_clear_flush(vma, haddr, pud);
+	old_pud = pudp_huge_clear_flush(vma, haddr, pud);
+	if (is_huge_zero_pud(old_pud))
+		return;
+
+	if (vma_is_dax(vma)) {
+		struct page *page = pud_page(old_pud);
+		struct folio *folio = page_folio(page);
+
+		if (!folio_test_dirty(folio) && pud_dirty(old_pud))
+			folio_mark_dirty(folio);
+		if (!folio_test_referenced(folio) && pud_young(old_pud))
+			folio_set_referenced(folio);
+		folio_remove_rmap_pud(folio, page, vma);
+		folio_put(folio);
+		add_mm_counter(vma->vm_mm, mm_counter_file(folio), -HPAGE_PUD_NR);
+	}
 }
 
 void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
diff --git a/mm/rmap.c b/mm/rmap.c
index e8fc5ec..e949e4f 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1165,6 +1165,7 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
 		atomic_add(orig_nr_pages, &folio->_large_mapcount);
 		break;
 	case RMAP_LEVEL_PMD:
+	case RMAP_LEVEL_PUD:
 		first = atomic_inc_and_test(&folio->_entire_mapcount);
 		if (first) {
 			nr = atomic_add_return_relaxed(ENTIRELY_MAPPED, mapped);
@@ -1306,6 +1307,12 @@ static __always_inline void __folio_add_anon_rmap(struct folio *folio,
 		case RMAP_LEVEL_PMD:
 			SetPageAnonExclusive(page);
 			break;
+		case RMAP_LEVEL_PUD:
+			/*
+			 * Keep the compiler happy, we don't support anonymous PUD mappings.
+			 */
+			WARN_ON_ONCE(1);
+			break;
 		}
 	}
 	for (i = 0; i < nr_pages; i++) {
@@ -1489,6 +1496,26 @@ void folio_add_file_rmap_pmd(struct folio *folio, struct page *page,
 #endif
 }
 
+/**
+ * folio_add_file_rmap_pud - add a PUD mapping to a page range of a folio
+ * @folio:	The folio to add the mapping to
+ * @page:	The first page to add
+ * @vma:	The vm area in which the mapping is added
+ *
+ * The page range of the folio is defined by [page, page + HPAGE_PUD_NR)
+ *
+ * The caller needs to hold the page table lock.
+ */
+void folio_add_file_rmap_pud(struct folio *folio, struct page *page,
+		struct vm_area_struct *vma)
+{
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+	__folio_add_file_rmap(folio, page, HPAGE_PUD_NR, vma, RMAP_LEVEL_PUD);
+#else
+	WARN_ON_ONCE(true);
+#endif
+}
+
 static __always_inline void __folio_remove_rmap(struct folio *folio,
 		struct page *page, int nr_pages, struct vm_area_struct *vma,
 		enum rmap_level level)
@@ -1521,6 +1548,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
 		partially_mapped = nr && atomic_read(mapped);
 		break;
 	case RMAP_LEVEL_PMD:
+	case RMAP_LEVEL_PUD:
 		atomic_dec(&folio->_large_mapcount);
 		last = atomic_add_negative(-1, &folio->_entire_mapcount);
 		if (last) {
@@ -1615,6 +1643,26 @@ void folio_remove_rmap_pmd(struct folio *folio, struct page *page,
 #endif
 }
 
+/**
+ * folio_remove_rmap_pud - remove a PUD mapping from a page range of a folio
+ * @folio:	The folio to remove the mapping from
+ * @page:	The first page to remove
+ * @vma:	The vm area from which the mapping is removed
+ *
+ * The page range of the folio is defined by [page, page + HPAGE_PUD_NR)
+ *
+ * The caller needs to hold the page table lock.
+ */
+void folio_remove_rmap_pud(struct folio *folio, struct page *page,
+		struct vm_area_struct *vma)
+{
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+	__folio_remove_rmap(folio, page, HPAGE_PUD_NR, vma, RMAP_LEVEL_PUD);
+#else
+	WARN_ON_ONCE(true);
+#endif
+}
+
 /*
  * @arg: enum ttu_flags will be passed to this argument
  */

From patchwork Thu Jun 27 00:54:23 2024
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 13713626
X-Patchwork-Delegate: iweiny@gmail.com
From: Alistair Popple
To: dan.j.williams@intel.com, vishal.l.verma@intel.com, dave.jiang@intel.com,
 logang@deltatee.com, bhelgaas@google.com, jack@suse.cz, jgg@ziepe.ca
Cc: catalin.marinas@arm.com, will@kernel.org, mpe@ellerman.id.au,
 npiggin@gmail.com, dave.hansen@linux.intel.com, ira.weiny@intel.com,
 willy@infradead.org, djwong@kernel.org, tytso@mit.edu, linmiaohe@huawei.com,
 david@redhat.com, peterx@redhat.com, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linuxppc-dev@lists.ozlabs.org, nvdimm@lists.linux.dev,
 linux-cxl@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org,
 jhubbard@nvidia.com, hch@lst.de, david@fromorbit.com, Alistair Popple
Subject: [PATCH 08/13] huge_memory: Allow mappings of PMD sized pages
Date: Thu, 27 Jun 2024 10:54:23 +1000
Currently DAX folio/page reference counts are managed differently from normal pages. To allow them to be managed in the same way as normal pages, introduce dax_insert_pfn_pmd(). This maps the entire PMD-sized folio and takes references as it would for a normally mapped page. This is distinct from the current mechanism, vmf_insert_pfn_pmd(), which simply inserts a special devmap PMD entry into the page table without holding a reference to the page for the mapping.
Signed-off-by: Alistair Popple
---
 include/linux/huge_mm.h |  1 +-
 mm/huge_memory.c        | 70 ++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 71 insertions(+)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index b98a3cc..9207d8e 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -39,6 +39,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,

 vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write);
 vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write);
+vm_fault_t dax_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write);
 vm_fault_t dax_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write);

 enum transparent_hugepage_flag {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e1f053e..a9874ac 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1202,6 +1202,76 @@ vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write)
 }
 EXPORT_SYMBOL_GPL(vmf_insert_pfn_pmd);

+vm_fault_t dax_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	unsigned long addr = vmf->address & PMD_MASK;
+	pmd_t *pmd = vmf->pmd;
+	struct mm_struct *mm = vma->vm_mm;
+	pmd_t entry;
+	spinlock_t *ptl;
+	pgtable_t pgtable = NULL;
+	struct folio *folio;
+	struct page *page;
+
+	if (addr < vma->vm_start || addr >= vma->vm_end)
+		return VM_FAULT_SIGBUS;
+
+	if (arch_needs_pgtable_deposit()) {
+		pgtable = pte_alloc_one(vma->vm_mm);
+		if (!pgtable)
+			return VM_FAULT_OOM;
+	}
+
+	track_pfn_insert(vma, &vma->vm_page_prot, pfn);
+
+	ptl = pmd_lock(mm, pmd);
+	if (!pmd_none(*pmd)) {
+		if (write) {
+			if (pmd_pfn(*pmd) != pfn_t_to_pfn(pfn)) {
+				WARN_ON_ONCE(!is_huge_zero_pmd(*pmd));
+				goto out_unlock;
+			}
+			entry = pmd_mkyoung(*pmd);
+			entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
+			if (pmdp_set_access_flags(vma, addr, pmd, entry, 1))
+				update_mmu_cache_pmd(vma, addr, pmd);
+		}
+
+		goto out_unlock;
+	}
+
+	entry = pmd_mkhuge(pfn_t_pmd(pfn, vma->vm_page_prot));
+	if (pfn_t_devmap(pfn))
+		entry = pmd_mkdevmap(entry);
+	if (write) {
+		entry = pmd_mkyoung(pmd_mkdirty(entry));
+		entry = maybe_pmd_mkwrite(entry, vma);
+	}
+
+	if (pgtable) {
+		pgtable_trans_huge_deposit(mm, pmd, pgtable);
+		mm_inc_nr_ptes(mm);
+		pgtable = NULL;
+	}
+
+	page = pfn_t_to_page(pfn);
+	folio = page_folio(page);
+	folio_get(folio);
+	folio_add_file_rmap_pmd(folio, page, vma);
+	add_mm_counter(mm, mm_counter_file(folio), HPAGE_PMD_NR);
+	set_pmd_at(mm, addr, pmd, entry);
+	update_mmu_cache_pmd(vma, addr, pmd);
+
+out_unlock:
+	spin_unlock(ptl);
+	if (pgtable)
+		pte_free(mm, pgtable);
+
+	return VM_FAULT_NOPAGE;
+}
+EXPORT_SYMBOL_GPL(dax_insert_pfn_pmd);
+
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
 static pud_t maybe_pud_mkwrite(pud_t pud, struct vm_area_struct *vma)
 {

From patchwork Thu Jun 27 00:54:24 2024
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 13713627
X-Patchwork-Delegate: iweiny@gmail.com
From: Alistair Popple
Subject: [PATCH 09/13] gup: Don't allow FOLL_LONGTERM pinning of FS DAX pages
Date: Thu, 27 Jun 2024 10:54:24 +1000
Message-ID: <74a9fc9e018e54d7afbeae166479e2358e0a1225.1719386613.git-series.apopple@nvidia.com>
Longterm pinning of FS DAX pages should already be disallowed by the various pXX_devmap checks. However, a future change will make these checks invalid for FS DAX pages, so make folio_is_longterm_pinnable() return false for FS DAX pages.
Signed-off-by: Alistair Popple
---
 include/linux/memremap.h | 11 +++++++++++
 include/linux/mm.h       |  4 ++++
 2 files changed, 15 insertions(+)

diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 6505713..19a448e 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -193,6 +193,17 @@ static inline bool folio_is_device_coherent(const struct folio *folio)
 	return is_device_coherent_page(&folio->page);
 }

+static inline bool is_device_dax_page(const struct page *page)
+{
+	return is_zone_device_page(page) &&
+		page_dev_pagemap(page)->type == MEMORY_DEVICE_FS_DAX;
+}
+
+static inline bool folio_is_device_dax(const struct folio *folio)
+{
+	return is_device_dax_page(&folio->page);
+}
+
 #ifdef CONFIG_ZONE_DEVICE
 void zone_device_page_init(struct page *page);
 void *memremap_pages(struct dev_pagemap *pgmap, int nid);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index b84368b..4d1cdea 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2032,6 +2032,10 @@ static inline bool folio_is_longterm_pinnable(struct folio *folio)
 	if (folio_is_device_coherent(folio))
 		return false;

+	/* DAX must also always allow eviction. */
+	if (folio_is_device_dax(folio))
+		return false;
+
 	/* Otherwise, non-movable zone folios can be pinned.
	 */
 	return !folio_is_zone_movable(folio);

From patchwork Thu Jun 27 00:54:25 2024
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 13713628
X-Patchwork-Delegate: iweiny@gmail.com
From: Alistair Popple
To: dan.j.williams@intel.com, vishal.l.verma@intel.com, dave.jiang@intel.com, logang@deltatee.com, bhelgaas@google.com, jack@suse.cz, jgg@ziepe.ca
Cc: catalin.marinas@arm.com, will@kernel.org, mpe@ellerman.id.au, npiggin@gmail.com, dave.hansen@linux.intel.com, ira.weiny@intel.com, willy@infradead.org, djwong@kernel.org, tytso@mit.edu, linmiaohe@huawei.com, david@redhat.com, peterx@redhat.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, nvdimm@lists.linux.dev, linux-cxl@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org, jhubbard@nvidia.com, hch@lst.de, david@fromorbit.com, Alistair Popple
Subject: [PATCH 10/13] fs/dax: Properly refcount fs dax pages
Date: Thu, 27 Jun 2024 10:54:25 +1000
Message-ID:
X-Mailer: git-send-email 2.43.0
Currently fs dax pages are considered free when the refcount drops to one and their refcounts are not increased when mapped via PTEs or decreased when unmapped. This requires special logic in mm paths to detect that these pages should not be properly refcounted, and to detect when the refcount drops to one instead of zero.

On the other hand get_user_pages(), etc. will properly refcount fs dax pages by taking a reference and dropping it when the page is unpinned.

Tracking this special behaviour requires extra PTE bits (eg. pte_devmap) and introduces rules that are potentially confusing and specific to FS DAX pages.

To fix this, and to possibly allow removal of the special PTE bits in future, convert the fs dax page refcounts to be zero based and instead take a reference on the page each time it is mapped as is currently the case for normal pages. This may also allow a future clean-up to remove the pgmap refcounting that is currently done in mm/gup.c.
Signed-off-by: Alistair Popple --- drivers/dax/device.c | 12 +- drivers/dax/super.c | 2 +- drivers/nvdimm/pmem.c | 8 +-- fs/dax.c | 193 +++++++++++++++++--------------------- fs/fuse/virtio_fs.c | 3 +- include/linux/dax.h | 4 +- include/linux/mm.h | 25 +----- include/linux/page-flags.h | 6 +- mm/gup.c | 9 +-- mm/huge_memory.c | 6 +- mm/internal.h | 2 +- mm/memory-failure.c | 6 +- mm/memremap.c | 24 +----- mm/mlock.c | 2 +- mm/mm_init.c | 3 +- mm/swap.c | 2 +- 16 files changed, 123 insertions(+), 184 deletions(-) diff --git a/drivers/dax/device.c b/drivers/dax/device.c index eb61598..b7a31ae 100644 --- a/drivers/dax/device.c +++ b/drivers/dax/device.c @@ -126,11 +126,11 @@ static vm_fault_t __dev_dax_pte_fault(struct dev_dax *dev_dax, return VM_FAULT_SIGBUS; } - pfn = phys_to_pfn_t(phys, PFN_DEV|PFN_MAP); + pfn = phys_to_pfn_t(phys, 0); dax_set_mapping(vmf, pfn, fault_size); - return vmf_insert_mixed(vmf->vma, vmf->address, pfn); + return dax_insert_pfn(vmf->vma, vmf->address, pfn, vmf->flags & FAULT_FLAG_WRITE); } static vm_fault_t __dev_dax_pmd_fault(struct dev_dax *dev_dax, @@ -169,11 +169,11 @@ static vm_fault_t __dev_dax_pmd_fault(struct dev_dax *dev_dax, return VM_FAULT_SIGBUS; } - pfn = phys_to_pfn_t(phys, PFN_DEV|PFN_MAP); + pfn = phys_to_pfn_t(phys, 0); dax_set_mapping(vmf, pfn, fault_size); - return vmf_insert_pfn_pmd(vmf, pfn, vmf->flags & FAULT_FLAG_WRITE); + return dax_insert_pfn_pmd(vmf, pfn, vmf->flags & FAULT_FLAG_WRITE); } #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD @@ -214,11 +214,11 @@ static vm_fault_t __dev_dax_pud_fault(struct dev_dax *dev_dax, return VM_FAULT_SIGBUS; } - pfn = phys_to_pfn_t(phys, PFN_DEV|PFN_MAP); + pfn = phys_to_pfn_t(phys, 0); dax_set_mapping(vmf, pfn, fault_size); - return vmf_insert_pfn_pud(vmf, pfn, vmf->flags & FAULT_FLAG_WRITE); + return dax_insert_pfn_pud(vmf, pfn, vmf->flags & FAULT_FLAG_WRITE); } #else static vm_fault_t __dev_dax_pud_fault(struct dev_dax *dev_dax, diff --git a/drivers/dax/super.c 
b/drivers/dax/super.c index aca71d7..d83196e 100644 --- a/drivers/dax/super.c +++ b/drivers/dax/super.c @@ -257,7 +257,7 @@ EXPORT_SYMBOL_GPL(dax_holder_notify_failure); void arch_wb_cache_pmem(void *addr, size_t size); void dax_flush(struct dax_device *dax_dev, void *addr, size_t size) { - if (unlikely(!dax_write_cache_enabled(dax_dev))) + if (unlikely(dax_dev && !dax_write_cache_enabled(dax_dev))) return; arch_wb_cache_pmem(addr, size); diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c index cafadd0..da13dc1 100644 --- a/drivers/nvdimm/pmem.c +++ b/drivers/nvdimm/pmem.c @@ -510,7 +510,7 @@ static int pmem_attach_disk(struct device *dev, pmem->disk = disk; pmem->pgmap.owner = pmem; - pmem->pfn_flags = PFN_DEV; + pmem->pfn_flags = 0; if (is_nd_pfn(dev)) { pmem->pgmap.type = MEMORY_DEVICE_FS_DAX; pmem->pgmap.ops = &fsdax_pagemap_ops; @@ -519,7 +519,7 @@ static int pmem_attach_disk(struct device *dev, pmem->data_offset = le64_to_cpu(pfn_sb->dataoff); pmem->pfn_pad = resource_size(res) - range_len(&pmem->pgmap.range); - pmem->pfn_flags |= PFN_MAP; + blk_queue_flag_set(QUEUE_FLAG_DAX, q); bb_range = pmem->pgmap.range; bb_range.start += pmem->data_offset; } else if (pmem_should_map_pages(dev)) { @@ -529,7 +529,7 @@ static int pmem_attach_disk(struct device *dev, pmem->pgmap.type = MEMORY_DEVICE_FS_DAX; pmem->pgmap.ops = &fsdax_pagemap_ops; addr = devm_memremap_pages(dev, &pmem->pgmap); - pmem->pfn_flags |= PFN_MAP; + blk_queue_flag_set(QUEUE_FLAG_DAX, q); bb_range = pmem->pgmap.range; } else { addr = devm_memremap(dev, pmem->phys_addr, @@ -547,8 +547,6 @@ static int pmem_attach_disk(struct device *dev, blk_queue_write_cache(q, true, fua); blk_queue_flag_set(QUEUE_FLAG_NONROT, q); blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, q); - if (pmem->pfn_flags & PFN_MAP) - blk_queue_flag_set(QUEUE_FLAG_DAX, q); disk->fops = &pmem_fops; disk->private_data = pmem; diff --git a/fs/dax.c b/fs/dax.c index f93afd7..862af24 100644 --- a/fs/dax.c +++ b/fs/dax.c @@ -71,6 +71,11 @@ 
static unsigned long dax_to_pfn(void *entry) return xa_to_value(entry) >> DAX_SHIFT; } +static struct folio *dax_to_folio(void *entry) +{ + return page_folio(pfn_to_page(dax_to_pfn(entry))); +} + static void *dax_make_entry(pfn_t pfn, unsigned long flags) { return xa_mk_value(flags | (pfn_t_to_pfn(pfn) << DAX_SHIFT)); @@ -318,85 +323,51 @@ static unsigned long dax_end_pfn(void *entry) */ #define for_each_mapped_pfn(entry, pfn) \ for (pfn = dax_to_pfn(entry); \ - pfn < dax_end_pfn(entry); pfn++) + pfn < dax_end_pfn(entry); pfn++) -static inline bool dax_page_is_shared(struct page *page) +static void dax_device_folio_init(struct folio *folio, int order) { - return page->mapping == PAGE_MAPPING_DAX_SHARED; -} - -/* - * Set the page->mapping with PAGE_MAPPING_DAX_SHARED flag, increase the - * refcount. - */ -static inline void dax_page_share_get(struct page *page) -{ - if (page->mapping != PAGE_MAPPING_DAX_SHARED) { - /* - * Reset the index if the page was already mapped - * regularly before. - */ - if (page->mapping) - page->share = 1; - page->mapping = PAGE_MAPPING_DAX_SHARED; - } - page->share++; -} - -static inline unsigned long dax_page_share_put(struct page *page) -{ - return --page->share; -} + int orig_order = folio_order(folio); + int i; -/* - * When it is called in dax_insert_entry(), the shared flag will indicate that - * whether this entry is shared by multiple files. If so, set the page->mapping - * PAGE_MAPPING_DAX_SHARED, and use page->share as refcount. 
- */ -static void dax_associate_entry(void *entry, struct address_space *mapping, - struct vm_area_struct *vma, unsigned long address, bool shared) -{ - unsigned long size = dax_entry_size(entry), pfn, index; - int i = 0; + if (orig_order != order) { + struct dev_pagemap *pgmap = page_dev_pagemap(&folio->page); - if (IS_ENABLED(CONFIG_FS_DAX_LIMITED)) - return; + for (i = 0; i < (1UL << orig_order); i++) { + ClearPageHead(folio_page(folio, i)); + clear_compound_head(folio_page(folio, i)); - index = linear_page_index(vma, address & ~(size - 1)); - for_each_mapped_pfn(entry, pfn) { - struct page *page = pfn_to_page(pfn); + /* Reset pgmap which was over-written by prep_compound_page() */ + folio_page(folio, i)->pgmap = pgmap; + } + } - if (shared) { - dax_page_share_get(page); - } else { - WARN_ON_ONCE(page->mapping); - page->mapping = mapping; - page->index = index + i++; + if (order > 0) { + prep_compound_page(&folio->page, order); + if (order > 1) { + VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio); + INIT_LIST_HEAD(&folio->_deferred_list); } } } -static void dax_disassociate_entry(void *entry, struct address_space *mapping, - bool trunc) +static void dax_associate_new_entry(void *entry, struct address_space *mapping, pgoff_t index) { - unsigned long pfn; + unsigned long order = dax_entry_order(entry); + struct folio *folio = dax_to_folio(entry); - if (IS_ENABLED(CONFIG_FS_DAX_LIMITED)) + if (!dax_entry_size(entry)) return; - for_each_mapped_pfn(entry, pfn) { - struct page *page = pfn_to_page(pfn); - - WARN_ON_ONCE(trunc && page_ref_count(page) > 1); - if (dax_page_is_shared(page)) { - /* keep the shared flag if this page is still shared */ - if (dax_page_share_put(page) > 0) - continue; - } else - WARN_ON_ONCE(page->mapping && page->mapping != mapping); - page->mapping = NULL; - page->index = 0; - } + /* + * We don't hold a reference for the DAX pagecache entry for the page. But we + * need to initialise the folio so we can hand it out. 
Nothing else should have + * a reference either. + */ + WARN_ON_ONCE(folio_ref_count(folio)); + dax_device_folio_init(folio, order); + folio->mapping = mapping; + folio->index = index; } static struct page *dax_busy_page(void *entry) @@ -406,7 +377,7 @@ static struct page *dax_busy_page(void *entry) for_each_mapped_pfn(entry, pfn) { struct page *page = pfn_to_page(pfn); - if (page_ref_count(page) > 1) + if (page_ref_count(page)) return page; } return NULL; @@ -620,7 +591,6 @@ static void *grab_mapping_entry(struct xa_state *xas, xas_lock_irq(xas); } - dax_disassociate_entry(entry, mapping, false); xas_store(xas, NULL); /* undo the PMD join */ dax_wake_entry(xas, entry, WAKE_ALL); mapping->nrpages -= PG_PMD_NR; @@ -743,7 +713,7 @@ struct page *dax_layout_busy_page(struct address_space *mapping) EXPORT_SYMBOL_GPL(dax_layout_busy_page); static int __dax_invalidate_entry(struct address_space *mapping, - pgoff_t index, bool trunc) + pgoff_t index, bool trunc) { XA_STATE(xas, &mapping->i_pages, index); int ret = 0; @@ -757,7 +727,6 @@ static int __dax_invalidate_entry(struct address_space *mapping, (xas_get_mark(&xas, PAGECACHE_TAG_DIRTY) || xas_get_mark(&xas, PAGECACHE_TAG_TOWRITE))) goto out; - dax_disassociate_entry(entry, mapping, trunc); xas_store(&xas, NULL); mapping->nrpages -= 1UL << dax_entry_order(entry); ret = 1; @@ -894,9 +863,11 @@ static void *dax_insert_entry(struct xa_state *xas, struct vm_fault *vmf, if (shared || dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) { void *old; - dax_disassociate_entry(entry, mapping, false); - dax_associate_entry(new_entry, mapping, vmf->vma, vmf->address, - shared); + if (!shared) { + dax_associate_new_entry(new_entry, mapping, + linear_page_index(vmf->vma, vmf->address)); + } + /* * Only swap our new entry into the page cache if the current * entry is a zero page or an empty entry. 
If a normal PTE or @@ -1084,9 +1055,7 @@ static int dax_iomap_direct_access(const struct iomap *iomap, loff_t pos, goto out; if (pfn_t_to_pfn(*pfnp) & (PHYS_PFN(size)-1)) goto out; - /* For larger pages we need devmap */ - if (length > 1 && !pfn_t_devmap(*pfnp)) - goto out; + rc = 0; out_check_addr: @@ -1189,11 +1158,14 @@ static vm_fault_t dax_load_hole(struct xa_state *xas, struct vm_fault *vmf, struct inode *inode = iter->inode; unsigned long vaddr = vmf->address; pfn_t pfn = pfn_to_pfn_t(my_zero_pfn(vaddr)); + struct page *page = pfn_t_to_page(pfn); vm_fault_t ret; *entry = dax_insert_entry(xas, vmf, iter, *entry, pfn, DAX_ZERO_PAGE); - ret = vmf_insert_mixed(vmf->vma, vaddr, pfn); + page_ref_inc(page); + ret = dax_insert_pfn(vmf->vma, vaddr, pfn, false); + put_page(page); trace_dax_load_hole(inode, vmf, ret); return ret; } @@ -1212,8 +1184,13 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf, pmd_t pmd_entry; pfn_t pfn; - zero_folio = mm_get_huge_zero_folio(vmf->vma->vm_mm); + if (arch_needs_pgtable_deposit()) { + pgtable = pte_alloc_one(vma->vm_mm); + if (!pgtable) + return VM_FAULT_OOM; + } + zero_folio = mm_get_huge_zero_folio(vmf->vma->vm_mm); if (unlikely(!zero_folio)) goto fallback; @@ -1221,29 +1198,23 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf, *entry = dax_insert_entry(xas, vmf, iter, *entry, pfn, DAX_PMD | DAX_ZERO_PAGE); - if (arch_needs_pgtable_deposit()) { - pgtable = pte_alloc_one(vma->vm_mm); - if (!pgtable) - return VM_FAULT_OOM; - } - ptl = pmd_lock(vmf->vma->vm_mm, vmf->pmd); - if (!pmd_none(*(vmf->pmd))) { - spin_unlock(ptl); - goto fallback; - } + if (!pmd_none(*vmf->pmd)) + goto fallback_unlock; - if (pgtable) { - pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable); - mm_inc_nr_ptes(vma->vm_mm); - } - pmd_entry = mk_pmd(&zero_folio->page, vmf->vma->vm_page_prot); + pmd_entry = mk_pmd(&zero_folio->page, vma->vm_page_prot); pmd_entry = pmd_mkhuge(pmd_entry); - 
set_pmd_at(vmf->vma->vm_mm, pmd_addr, vmf->pmd, pmd_entry); + if (pgtable) + pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable); + set_pmd_at(vma->vm_mm, pmd_addr, vmf->pmd, pmd_entry); spin_unlock(ptl); trace_dax_pmd_load_hole(inode, vmf, zero_folio, *entry); return VM_FAULT_NOPAGE; +fallback_unlock: + spin_unlock(ptl); + mm_put_huge_zero_folio(vma->vm_mm); + fallback: if (pgtable) pte_free(vma->vm_mm, pgtable); @@ -1649,9 +1620,10 @@ static vm_fault_t dax_fault_iter(struct vm_fault *vmf, loff_t pos = (loff_t)xas->xa_index << PAGE_SHIFT; bool write = iter->flags & IOMAP_WRITE; unsigned long entry_flags = pmd ? DAX_PMD : 0; - int err = 0; + int ret, err = 0; pfn_t pfn; void *kaddr; + struct page *page; if (!pmd && vmf->cow_page) return dax_fault_cow_page(vmf, iter); @@ -1684,14 +1656,21 @@ static vm_fault_t dax_fault_iter(struct vm_fault *vmf, if (dax_fault_is_synchronous(iter, vmf->vma)) return dax_fault_synchronous_pfnp(pfnp, pfn); - /* insert PMD pfn */ + page = pfn_t_to_page(pfn); + page_ref_inc(page); + if (pmd) - return vmf_insert_pfn_pmd(vmf, pfn, write); + ret = dax_insert_pfn_pmd(vmf, pfn, write); + else + ret = dax_insert_pfn(vmf->vma, vmf->address, pfn, write); + + /* + * Insert PMD/PTE will have a reference on the page when mapping it so drop + * ours. 
+ */ + put_page(page); - /* insert PTE pfn */ - if (write) - return vmf_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn); - return vmf_insert_mixed(vmf->vma, vmf->address, pfn); + return ret; } static vm_fault_t dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp, @@ -1932,6 +1911,7 @@ dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn, unsigned int order) XA_STATE_ORDER(xas, &mapping->i_pages, vmf->pgoff, order); void *entry; vm_fault_t ret; + struct page *page; xas_lock_irq(&xas); entry = get_unlocked_entry(&xas, order); @@ -1947,14 +1927,17 @@ dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn, unsigned int order) xas_set_mark(&xas, PAGECACHE_TAG_DIRTY); dax_lock_entry(&xas, entry); xas_unlock_irq(&xas); + page = pfn_t_to_page(pfn); + page_ref_inc(page); if (order == 0) - ret = vmf_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn); + ret = dax_insert_pfn(vmf->vma, vmf->address, pfn, true); #ifdef CONFIG_FS_DAX_PMD else if (order == PMD_ORDER) - ret = vmf_insert_pfn_pmd(vmf, pfn, FAULT_FLAG_WRITE); + ret = dax_insert_pfn_pmd(vmf, pfn, FAULT_FLAG_WRITE); #endif else ret = VM_FAULT_FALLBACK; + put_page(page); dax_unlock_entry(&xas, entry); trace_dax_insert_pfn_mkwrite(mapping->host, vmf, ret); return ret; @@ -2068,6 +2051,12 @@ EXPORT_SYMBOL_GPL(dax_remap_file_range_prep); void dax_page_free(struct page *page) { + /* + * Make sure we flush any cached data to the page now that it's free. 
+ */ + if (PageDirty(page)) + dax_flush(NULL, page_address(page), page_size(page)); + wake_up_var(page); } EXPORT_SYMBOL_GPL(dax_page_free); diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c index 6e90a4b..4462ff6 100644 --- a/fs/fuse/virtio_fs.c +++ b/fs/fuse/virtio_fs.c @@ -873,8 +873,7 @@ static long virtio_fs_direct_access(struct dax_device *dax_dev, pgoff_t pgoff, if (kaddr) *kaddr = fs->window_kaddr + offset; if (pfn) - *pfn = phys_to_pfn_t(fs->window_phys_addr + offset, - PFN_DEV | PFN_MAP); + *pfn = phys_to_pfn_t(fs->window_phys_addr + offset, 0); return nr_pages > max_nr_pages ? max_nr_pages : nr_pages; } diff --git a/include/linux/dax.h b/include/linux/dax.h index adbafc8..02dc580 100644 --- a/include/linux/dax.h +++ b/include/linux/dax.h @@ -218,7 +218,9 @@ static inline int dax_wait_page_idle(struct page *page, void (cb)(struct inode *), struct inode *inode) { - return ___wait_var_event(page, page_ref_count(page) == 1, + int i = 0; + + return ___wait_var_event(page, page_ref_count(page) == 1 || i++, TASK_INTERRUPTIBLE, 0, 0, cb(inode)); } diff --git a/include/linux/mm.h b/include/linux/mm.h index 4d1cdea..47d8923 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1440,25 +1440,6 @@ vm_fault_t finish_fault(struct vm_fault *vmf); * back into memory. 
*/ -#if defined(CONFIG_ZONE_DEVICE) && defined(CONFIG_FS_DAX) -DECLARE_STATIC_KEY_FALSE(devmap_managed_key); - -bool __put_devmap_managed_folio_refs(struct folio *folio, int refs); -static inline bool put_devmap_managed_folio_refs(struct folio *folio, int refs) -{ - if (!static_branch_unlikely(&devmap_managed_key)) - return false; - if (!folio_is_zone_device(folio)) - return false; - return __put_devmap_managed_folio_refs(folio, refs); -} -#else /* CONFIG_ZONE_DEVICE && CONFIG_FS_DAX */ -static inline bool put_devmap_managed_folio_refs(struct folio *folio, int refs) -{ - return false; -} -#endif /* CONFIG_ZONE_DEVICE && CONFIG_FS_DAX */ - /* 127: arbitrary random number, small enough to assemble well */ #define folio_ref_zero_or_close_to_overflow(folio) \ ((unsigned int) folio_ref_count(folio) + 127u <= 127u) @@ -1573,12 +1554,6 @@ static inline void put_page(struct page *page) { struct folio *folio = page_folio(page); - /* - * For some devmap managed pages we need to catch refcount transition - * from 2 to 1: - */ - if (put_devmap_managed_folio_refs(folio, 1)) - return; folio_put(folio); } diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index 104078a..72c48af 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -682,12 +682,6 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted) #define PAGE_MAPPING_KSM (PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE) #define PAGE_MAPPING_FLAGS (PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE) -/* - * Different with flags above, this flag is used only for fsdax mode. It - * indicates that this page->mapping is now under reflink case. 
- */ -#define PAGE_MAPPING_DAX_SHARED ((void *)0x1) - static __always_inline bool folio_mapping_flags(const struct folio *folio) { return ((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS) != 0; diff --git a/mm/gup.c b/mm/gup.c index 669583e..ce80ff6 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -89,8 +89,7 @@ static inline struct folio *try_get_folio(struct page *page, int refs) * belongs to this folio. */ if (unlikely(page_folio(page) != folio)) { - if (!put_devmap_managed_folio_refs(folio, refs)) - folio_put_refs(folio, refs); + folio_put_refs(folio, refs); goto retry; } @@ -156,8 +155,7 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags) */ if (unlikely((flags & FOLL_LONGTERM) && !folio_is_longterm_pinnable(folio))) { - if (!put_devmap_managed_folio_refs(folio, refs)) - folio_put_refs(folio, refs); + folio_put_refs(folio, refs); return NULL; } @@ -198,8 +196,7 @@ static void gup_put_folio(struct folio *folio, int refs, unsigned int flags) refs *= GUP_PIN_COUNTING_BIAS; } - if (!put_devmap_managed_folio_refs(folio, refs)) - folio_put_refs(folio, refs); + folio_put_refs(folio, refs); } /** diff --git a/mm/huge_memory.c b/mm/huge_memory.c index a9874ac..5191f91 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1965,7 +1965,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, tlb->fullmm); arch_check_zapped_pmd(vma, orig_pmd); tlb_remove_pmd_tlb_entry(tlb, pmd, addr); - if (vma_is_special_huge(vma)) { + if (!vma_is_dax(vma) && vma_is_special_huge(vma)) { if (arch_needs_pgtable_deposit()) zap_deposited_table(tlb->mm, pmd); spin_unlock(ptl); @@ -2557,13 +2557,15 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, */ if (arch_needs_pgtable_deposit()) zap_deposited_table(mm, pmd); - if (vma_is_special_huge(vma)) + if (!vma_is_dax(vma) && vma_is_special_huge(vma)) return; if (unlikely(is_pmd_migration_entry(old_pmd))) { swp_entry_t entry; entry = pmd_to_swp_entry(old_pmd); folio = 
pfn_swap_entry_folio(entry); + } else if (is_huge_zero_pmd(old_pmd)) { + return; } else { page = pmd_page(old_pmd); folio = page_folio(page); diff --git a/mm/internal.h b/mm/internal.h index c72c306..b07e70e 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -637,8 +637,6 @@ static inline void prep_compound_tail(struct page *head, int tail_idx) set_page_private(p, 0); } -extern void prep_compound_page(struct page *page, unsigned int order); - extern void post_alloc_hook(struct page *page, unsigned int order, gfp_t gfp_flags); extern bool free_pages_prepare(struct page *page, unsigned int order); diff --git a/mm/memory-failure.c b/mm/memory-failure.c index d3c830e..47491ef 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -411,18 +411,18 @@ static unsigned long dev_pagemap_mapping_shift(struct vm_area_struct *vma, pud = pud_offset(p4d, address); if (!pud_present(*pud)) return 0; - if (pud_devmap(*pud)) + if (pud_trans_huge(*pud)) return PUD_SHIFT; pmd = pmd_offset(pud, address); if (!pmd_present(*pmd)) return 0; - if (pmd_devmap(*pmd)) + if (pmd_trans_huge(*pmd)) return PMD_SHIFT; pte = pte_offset_map(pmd, address); if (!pte) return 0; ptent = ptep_get(pte); - if (pte_present(ptent) && pte_devmap(ptent)) + if (pte_present(ptent)) ret = PAGE_SHIFT; pte_unmap(pte); return ret; diff --git a/mm/memremap.c b/mm/memremap.c index 13c1d5b..2476aad 100644 --- a/mm/memremap.c +++ b/mm/memremap.c @@ -485,18 +485,20 @@ void free_zone_device_folio(struct folio *folio) * handled differently or not done at all, so there is no need * to clear folio->mapping. 
*/ - folio->mapping = NULL; page_dev_pagemap(&folio->page)->ops->page_free(folio_page(folio, 0)); if (folio->page.pgmap->type == MEMORY_DEVICE_PRIVATE || folio->page.pgmap->type == MEMORY_DEVICE_COHERENT) put_dev_pagemap(folio->page.pgmap); - else if (folio->page.pgmap->type != MEMORY_DEVICE_PCI_P2PDMA) + else if (folio->page.pgmap->type != MEMORY_DEVICE_PCI_P2PDMA && + folio->page.pgmap->type != MEMORY_DEVICE_FS_DAX) /* * Reset the refcount to 1 to prepare for handing out the page * again. */ folio_set_count(folio, 1); + + folio->mapping = NULL; } void zone_device_page_init(struct page *page) @@ -510,21 +512,3 @@ void zone_device_page_init(struct page *page) lock_page(page); } EXPORT_SYMBOL_GPL(zone_device_page_init); - -#ifdef CONFIG_FS_DAX -bool __put_devmap_managed_folio_refs(struct folio *folio, int refs) -{ - if (folio->page.pgmap->type != MEMORY_DEVICE_FS_DAX) - return false; - - /* - * fsdax page refcounts are 1-based, rather than 0-based: if - * refcount is 1, then the page is free and the refcount is - * stable because nobody holds a reference on the page. 
- */ - if (folio_ref_sub_return(folio, refs) == 1) - wake_up_var(&folio->_refcount); - return true; -} -EXPORT_SYMBOL(__put_devmap_managed_folio_refs); -#endif /* CONFIG_FS_DAX */ diff --git a/mm/mlock.c b/mm/mlock.c index 30b51cd..03fa9e9 100644 --- a/mm/mlock.c +++ b/mm/mlock.c @@ -373,6 +373,8 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr, unsigned long start = addr; ptl = pmd_trans_huge_lock(pmd, vma); + if (vma_is_dax(vma)) + ptl = NULL; if (ptl) { if (!pmd_present(*pmd)) goto out; diff --git a/mm/mm_init.c b/mm/mm_init.c index b7e1599..f11ee0d 100644 --- a/mm/mm_init.c +++ b/mm/mm_init.c @@ -1016,7 +1016,8 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn, */ if (pgmap->type == MEMORY_DEVICE_PRIVATE || pgmap->type == MEMORY_DEVICE_COHERENT || - pgmap->type == MEMORY_DEVICE_PCI_P2PDMA) + pgmap->type == MEMORY_DEVICE_PCI_P2PDMA || + pgmap->type == MEMORY_DEVICE_FS_DAX) set_page_count(page, 0); } diff --git a/mm/swap.c b/mm/swap.c index 67786cb..041cda6 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -983,8 +983,6 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs) unlock_page_lruvec_irqrestore(lruvec, flags); lruvec = NULL; } - if (put_devmap_managed_folio_refs(folio, nr_refs)) - continue; if (folio_ref_sub_and_test(folio, nr_refs)) free_zone_device_folio(folio); continue;

From patchwork Thu Jun 27 00:54:26 2024
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 13713629
X-Patchwork-Delegate: iweiny@gmail.com
From: Alistair Popple
To: dan.j.williams@intel.com,
 vishal.l.verma@intel.com, dave.jiang@intel.com, logang@deltatee.com,
 bhelgaas@google.com, jack@suse.cz, jgg@ziepe.ca
Cc: catalin.marinas@arm.com, will@kernel.org, mpe@ellerman.id.au,
 npiggin@gmail.com, dave.hansen@linux.intel.com, ira.weiny@intel.com,
 willy@infradead.org, djwong@kernel.org, tytso@mit.edu, linmiaohe@huawei.com,
 david@redhat.com, peterx@redhat.com, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linuxppc-dev@lists.ozlabs.org, nvdimm@lists.linux.dev,
 linux-cxl@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
 linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org, jhubbard@nvidia.com,
 hch@lst.de, david@fromorbit.com, Alistair Popple
Subject: [PATCH 11/13] huge_memory: Remove dead vmf_insert_pXd code
Date: Thu, 27 Jun 2024 10:54:26 +1000
Message-ID: <400a4584f6f628998a7093aee49d9f86c592754b.1719386613.git-series.apopple@nvidia.com>
Now that DAX is managing page reference counts the same as normal pages
there are no callers for the vmf_insert_pXd functions, so remove them.

Signed-off-by: Alistair Popple
---
 include/linux/huge_mm.h |   2 +-
 mm/huge_memory.c        | 165 +-----------------------------------------
 2 files changed, 167 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 9207d8e..0fb6bff 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -37,8 +37,6 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		    pmd_t *pmd, unsigned long addr, pgprot_t newprot,
 		    unsigned long cp_flags);
-vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write);
-vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write);
 vm_fault_t dax_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write);
 vm_fault_t dax_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5191f91..de39af4 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1111,97 +1111,6 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 	return __do_huge_pmd_anonymous_page(vmf, &folio->page, gfp);
 }
-static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
-		pmd_t *pmd, pfn_t pfn, pgprot_t prot, bool write,
-		pgtable_t pgtable)
-{
-	struct mm_struct *mm = vma->vm_mm;
-	pmd_t entry;
-	spinlock_t *ptl;
-
-	ptl = pmd_lock(mm, pmd);
-	if (!pmd_none(*pmd)) {
-		if (write) {
-			if (pmd_pfn(*pmd) != pfn_t_to_pfn(pfn)) {
-				WARN_ON_ONCE(!is_huge_zero_pmd(*pmd));
-				goto out_unlock;
-			}
-			entry = pmd_mkyoung(*pmd);
-			entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
-			if (pmdp_set_access_flags(vma, addr, pmd, entry, 1))
-				update_mmu_cache_pmd(vma, addr, pmd);
-		}
-
-		goto out_unlock;
-	}
-
-	entry = pmd_mkhuge(pfn_t_pmd(pfn, prot));
-	if (pfn_t_devmap(pfn))
-		entry = pmd_mkdevmap(entry);
-	if (write) {
-		entry = pmd_mkyoung(pmd_mkdirty(entry));
-		entry = maybe_pmd_mkwrite(entry, vma);
-	}
-
-	if (pgtable) {
-		pgtable_trans_huge_deposit(mm, pmd, pgtable);
-		mm_inc_nr_ptes(mm);
-		pgtable = NULL;
-	}
-
-	set_pmd_at(mm, addr, pmd, entry);
-	update_mmu_cache_pmd(vma, addr, pmd);
-
-out_unlock:
-	spin_unlock(ptl);
-	if (pgtable)
-		pte_free(mm, pgtable);
-}
-
-/**
- * vmf_insert_pfn_pmd - insert a pmd size pfn
- * @vmf: Structure describing the fault
- * @pfn: pfn to insert
- * @write: whether it's a write fault
- *
- * Insert a pmd size pfn. See vmf_insert_pfn() for additional info.
- *
- * Return: vm_fault_t value.
- */
-vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write)
-{
-	unsigned long addr = vmf->address & PMD_MASK;
-	struct vm_area_struct *vma = vmf->vma;
-	pgprot_t pgprot = vma->vm_page_prot;
-	pgtable_t pgtable = NULL;
-
-	/*
-	 * If we had pmd_special, we could avoid all these restrictions,
-	 * but we need to be consistent with PTEs and architectures that
-	 * can't support a 'special' bit.
-	 */
-	BUG_ON(!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) &&
-			!pfn_t_devmap(pfn));
-	BUG_ON((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) ==
-			(VM_PFNMAP|VM_MIXEDMAP));
-	BUG_ON((vma->vm_flags & VM_PFNMAP) && is_cow_mapping(vma->vm_flags));
-
-	if (addr < vma->vm_start || addr >= vma->vm_end)
-		return VM_FAULT_SIGBUS;
-
-	if (arch_needs_pgtable_deposit()) {
-		pgtable = pte_alloc_one(vma->vm_mm);
-		if (!pgtable)
-			return VM_FAULT_OOM;
-	}
-
-	track_pfn_insert(vma, &pgprot, pfn);
-
-	insert_pfn_pmd(vma, addr, vmf->pmd, pfn, pgprot, write, pgtable);
-	return VM_FAULT_NOPAGE;
-}
-EXPORT_SYMBOL_GPL(vmf_insert_pfn_pmd);
-
 vm_fault_t dax_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write)
 {
 	struct vm_area_struct *vma = vmf->vma;
@@ -1280,80 +1189,6 @@ static pud_t maybe_pud_mkwrite(pud_t pud, struct vm_area_struct *vma)
 	return pud;
 }
-static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
-		pud_t *pud, pfn_t pfn, bool write)
-{
-	struct mm_struct *mm = vma->vm_mm;
-	pgprot_t prot = vma->vm_page_prot;
-	pud_t entry;
-	spinlock_t *ptl;
-
-	ptl = pud_lock(mm, pud);
-	if (!pud_none(*pud)) {
-		if (write) {
-			if (pud_pfn(*pud) != pfn_t_to_pfn(pfn)) {
-				WARN_ON_ONCE(!is_huge_zero_pud(*pud));
-				goto out_unlock;
-			}
-			entry = pud_mkyoung(*pud);
-			entry = maybe_pud_mkwrite(pud_mkdirty(entry), vma);
-			if (pudp_set_access_flags(vma, addr, pud, entry, 1))
-				update_mmu_cache_pud(vma, addr, pud);
-		}
-		goto out_unlock;
-	}
-
-	entry = pud_mkhuge(pfn_t_pud(pfn, prot));
-	if (pfn_t_devmap(pfn))
-		entry = pud_mkdevmap(entry);
-	if (write) {
-		entry = pud_mkyoung(pud_mkdirty(entry));
-		entry = maybe_pud_mkwrite(entry, vma);
-	}
-	set_pud_at(mm, addr, pud, entry);
-	update_mmu_cache_pud(vma, addr, pud);
-
-out_unlock:
-	spin_unlock(ptl);
-}
-
-/**
- * vmf_insert_pfn_pud - insert a pud size pfn
- * @vmf: Structure describing the fault
- * @pfn: pfn to insert
- * @write: whether it's a write fault
- *
- * Insert a pud size pfn. See vmf_insert_pfn() for additional info.
- *
- * Return: vm_fault_t value.
- */
-vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write)
-{
-	unsigned long addr = vmf->address & PUD_MASK;
-	struct vm_area_struct *vma = vmf->vma;
-	pgprot_t pgprot = vma->vm_page_prot;
-
-	/*
-	 * If we had pud_special, we could avoid all these restrictions,
-	 * but we need to be consistent with PTEs and architectures that
-	 * can't support a 'special' bit.
-	 */
-	BUG_ON(!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) &&
-			!pfn_t_devmap(pfn));
-	BUG_ON((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) ==
-			(VM_PFNMAP|VM_MIXEDMAP));
-	BUG_ON((vma->vm_flags & VM_PFNMAP) && is_cow_mapping(vma->vm_flags));
-
-	if (addr < vma->vm_start || addr >= vma->vm_end)
-		return VM_FAULT_SIGBUS;
-
-	track_pfn_insert(vma, &pgprot, pfn);
-
-	insert_pfn_pud(vma, addr, vmf->pud, pfn, write);
-	return VM_FAULT_NOPAGE;
-}
-EXPORT_SYMBOL_GPL(vmf_insert_pfn_pud);
-
 /**
  * dax_insert_pfn_pud - insert a pud size pfn backed by a normal page
  * @vmf: Structure describing the fault

From patchwork Thu Jun 27 00:54:27 2024
From: Alistair Popple
To: dan.j.williams@intel.com, vishal.l.verma@intel.com, dave.jiang@intel.com,
 logang@deltatee.com, bhelgaas@google.com, jack@suse.cz, jgg@ziepe.ca
Cc: catalin.marinas@arm.com, will@kernel.org, mpe@ellerman.id.au,
 npiggin@gmail.com, dave.hansen@linux.intel.com,
 ira.weiny@intel.com, willy@infradead.org, djwong@kernel.org, tytso@mit.edu,
 linmiaohe@huawei.com, david@redhat.com, peterx@redhat.com,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
 nvdimm@lists.linux.dev, linux-cxl@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-ext4@vger.kernel.org,
 linux-xfs@vger.kernel.org, jhubbard@nvidia.com, hch@lst.de,
 david@fromorbit.com, Alistair Popple
Subject: [PATCH 12/13] mm: Remove pXX_devmap callers
Date: Thu, 27 Jun 2024 10:54:27 +1000
Message-ID: <05b504fd550ae6289e9e508012e3f35adea2e5db.1719386613.git-series.apopple@nvidia.com>
The devmap PTE special bit was used to detect mappings of FS DAX pages. This
tracking was required to ensure the generic mm did not manipulate the page
reference counts, as FS DAX implemented its own reference counting scheme.

Now that FS DAX pages have their references counted the same way as normal
pages this tracking is no longer needed and can be removed.

Almost all existing uses of pmd_devmap() are paired with a check of
pmd_trans_huge(). As pmd_trans_huge() now returns true for FS DAX pages,
dropping the check in these cases doesn't change anything. However care needs
to be taken because pmd_trans_huge() also checks that a page is not an FS DAX
page. This is dealt with either by checking !vma_is_dax() or by relying on the
fact that the page pointer was obtained from a page list. This is possible
because zone device pages cannot appear in any page list due to sharing
page->lru with page->pgmap.
Signed-off-by: Alistair Popple
---
 arch/powerpc/mm/book3s64/hash_pgtable.c  |   3 +-
 arch/powerpc/mm/book3s64/pgtable.c       |   8 +-
 arch/powerpc/mm/book3s64/radix_pgtable.c |   5 +-
 arch/powerpc/mm/pgtable.c                |   2 +-
 fs/dax.c                                 |   5 +-
 fs/userfaultfd.c                         |   2 +-
 include/linux/huge_mm.h                  |  10 +-
 include/linux/pgtable.h                  |   2 +-
 mm/gup.c                                 | 164 +-----------------------
 mm/hmm.c                                 |   7 +-
 mm/huge_memory.c                         |  61 +--------
 mm/khugepaged.c                          |   2 +-
 mm/mapping_dirty_helpers.c               |   4 +-
 mm/memory.c                              |  37 +----
 mm/migrate_device.c                      |   2 +-
 mm/mprotect.c                            |   2 +-
 mm/mremap.c                              |   5 +-
 mm/page_vma_mapped.c                     |   5 +-
 mm/pgtable-generic.c                     |   7 +-
 mm/userfaultfd.c                         |   2 +-
 mm/vmscan.c                              |   5 +-
 21 files changed, 53 insertions(+), 287 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/hash_pgtable.c b/arch/powerpc/mm/book3s64/hash_pgtable.c
index 988948d..82d3117 100644
--- a/arch/powerpc/mm/book3s64/hash_pgtable.c
+++ b/arch/powerpc/mm/book3s64/hash_pgtable.c
@@ -195,7 +195,7 @@ unsigned long hash__pmd_hugepage_update(struct mm_struct *mm, unsigned long addr
 	unsigned long old;
 
 #ifdef CONFIG_DEBUG_VM
-	WARN_ON(!hash__pmd_trans_huge(*pmdp) && !pmd_devmap(*pmdp));
+	WARN_ON(!hash__pmd_trans_huge(*pmdp));
 	assert_spin_locked(pmd_lockptr(mm, pmdp));
 #endif
@@ -227,7 +227,6 @@ pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long addres
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
 	VM_BUG_ON(pmd_trans_huge(*pmdp));
-	VM_BUG_ON(pmd_devmap(*pmdp));
 
 	pmd = *pmdp;
 	pmd_clear(pmdp);
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index 2975ea0..65dd1fe 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -50,7 +50,7 @@ int pmdp_set_access_flags(struct vm_area_struct *vma, unsigned long address,
 {
 	int changed;
 #ifdef CONFIG_DEBUG_VM
-	WARN_ON(!pmd_trans_huge(*pmdp) && !pmd_devmap(*pmdp));
+	WARN_ON(!pmd_trans_huge(*pmdp));
 	assert_spin_locked(pmd_lockptr(vma->vm_mm, pmdp));
 #endif
 	changed = !pmd_same(*(pmdp), entry);
@@ -70,7 +70,6 @@ int pudp_set_access_flags(struct vm_area_struct *vma, unsigned long address,
 {
 	int changed;
 #ifdef CONFIG_DEBUG_VM
-	WARN_ON(!pud_devmap(*pudp));
 	assert_spin_locked(pud_lockptr(vma->vm_mm, pudp));
 #endif
 	changed = !pud_same(*(pudp), entry);
@@ -182,7 +181,7 @@ pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
 	pmd_t pmd;
 	VM_BUG_ON(addr & ~HPAGE_PMD_MASK);
-	VM_BUG_ON((pmd_present(*pmdp) && !pmd_trans_huge(*pmdp) &&
-		   !pmd_devmap(*pmdp)) || !pmd_present(*pmdp));
+	VM_BUG_ON((pmd_present(*pmdp) && !pmd_trans_huge(*pmdp)) ||
+		   !pmd_present(*pmdp));
 	pmd = pmdp_huge_get_and_clear(vma->vm_mm, addr, pmdp);
 	/*
 	 * if it not a fullmm flush, then we can possibly end up converting
@@ -200,8 +199,7 @@ pud_t pudp_huge_get_and_clear_full(struct vm_area_struct *vma,
 	pud_t pud;
 	VM_BUG_ON(addr & ~HPAGE_PMD_MASK);
-	VM_BUG_ON((pud_present(*pudp) && !pud_devmap(*pudp)) ||
-		  !pud_present(*pudp));
+	VM_BUG_ON(!pud_present(*pudp));
 	pud = pudp_huge_get_and_clear(vma->vm_mm, addr, pudp);
 	/*
 	 * if it not a fullmm flush, then we can possibly end up converting
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 15e88f1..1c195bc 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -1348,7 +1348,7 @@ unsigned long radix__pmd_hugepage_update(struct mm_struct *mm, unsigned long add
 	unsigned long old;
 
 #ifdef CONFIG_DEBUG_VM
-	WARN_ON(!radix__pmd_trans_huge(*pmdp) && !pmd_devmap(*pmdp));
+	WARN_ON(!radix__pmd_trans_huge(*pmdp));
 	assert_spin_locked(pmd_lockptr(mm, pmdp));
 #endif
@@ -1365,7 +1365,7 @@ unsigned long radix__pud_hugepage_update(struct mm_struct *mm, unsigned long add
 	unsigned long old;
 
 #ifdef CONFIG_DEBUG_VM
-	WARN_ON(!pud_devmap(*pudp));
+	WARN_ON(!pud_trans_huge(*pudp));
 	assert_spin_locked(pud_lockptr(mm, pudp));
 #endif
@@ -1383,7 +1383,6 @@ pmd_t radix__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long addre
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
 	VM_BUG_ON(radix__pmd_trans_huge(*pmdp));
-	VM_BUG_ON(pmd_devmap(*pmdp));
 	/*
 	 * khugepaged calls this for normal pmd
 	 */
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index 9e7ba9c..11d3b40 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -464,7 +464,7 @@ pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
 		return NULL;
 #endif
-	if (pmd_trans_huge(pmd) || pmd_devmap(pmd)) {
+	if (pmd_trans_huge(pmd)) {
 		if (is_thp)
 			*is_thp = true;
 		ret_pte = (pte_t *)pmdp;
diff --git a/fs/dax.c b/fs/dax.c
index 862af24..7edd18c 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1714,7 +1714,7 @@ static vm_fault_t dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
 	 * the PTE we need to set up. If so just return and the fault will be
 	 * retried.
 	 */
-	if (pmd_trans_huge(*vmf->pmd) || pmd_devmap(*vmf->pmd)) {
+	if (pmd_trans_huge(*vmf->pmd)) {
 		ret = VM_FAULT_NOPAGE;
 		goto unlock_entry;
 	}
@@ -1835,8 +1835,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
 	 * the PMD we need to set up. If so just return and the fault will be
 	 * retried.
 	 */
-	if (!pmd_none(*vmf->pmd) && !pmd_trans_huge(*vmf->pmd) &&
-	    !pmd_devmap(*vmf->pmd)) {
+	if (!pmd_none(*vmf->pmd) && !pmd_trans_huge(*vmf->pmd)) {
 		ret = 0;
 		goto unlock_entry;
 	}
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index eee7320..094401f 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -319,7 +319,7 @@ static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
 		goto out;
 
 	ret = false;
-	if (!pmd_present(_pmd) || pmd_devmap(_pmd))
+	if (!pmd_present(_pmd) || vma_is_dax(vmf->vma))
 		goto out;
 
 	if (pmd_trans_huge(_pmd)) {
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 0fb6bff..eb3f444 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -322,8 +322,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 #define split_huge_pmd(__vma, __pmd, __address)				\
 	do {								\
 		pmd_t *____pmd = (__pmd);				\
-		if (is_swap_pmd(*____pmd) || pmd_trans_huge(*____pmd)	\
-					|| pmd_devmap(*____pmd))	\
+		if (is_swap_pmd(*____pmd) || pmd_trans_huge(*____pmd))	\
 			__split_huge_pmd(__vma, __pmd, __address,	\
 						false, NULL);		\
 	}  while (0)
@@ -338,8 +337,7 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
 #define split_huge_pud(__vma, __pud, __address)				\
 	do {								\
 		pud_t *____pud = (__pud);				\
-		if (pud_trans_huge(*____pud)				\
-					|| pud_devmap(*____pud))	\
+		if (pud_trans_huge(*____pud))				\
 			__split_huge_pud(__vma, __pud, __address);	\
 	}  while (0)
@@ -362,7 +360,7 @@ static inline int is_swap_pmd(pmd_t pmd)
 static inline spinlock_t *pmd_trans_huge_lock(pmd_t *pmd,
 		struct vm_area_struct *vma)
 {
-	if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd))
+	if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd))
 		return __pmd_trans_huge_lock(pmd, vma);
 	else
 		return NULL;
@@ -370,7 +368,7 @@ static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
 		struct vm_area_struct *vma)
 {
-	if (pud_trans_huge(*pud) || pud_devmap(*pud))
+	if (pud_trans_huge(*pud))
 		return __pud_trans_huge_lock(pud, vma);
 	else
 		return NULL;
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 18019f0..91e06bb 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1620,7 +1620,7 @@ static inline int pud_trans_unstable(pud_t *pud)
 	defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
 	pud_t pudval = READ_ONCE(*pud);
 
-	if (pud_none(pudval) || pud_trans_huge(pudval) || pud_devmap(pudval))
+	if (pud_none(pudval) || pud_trans_huge(pudval))
 		return 1;
 	if (unlikely(pud_bad(pudval))) {
 		pud_clear_bad(pud);
diff --git a/mm/gup.c b/mm/gup.c
index ce80ff6..88fb92b 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -699,31 +699,9 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma,
 		return NULL;
 
 	pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
-
-	if (IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) &&
-	    pud_devmap(pud)) {
-		/*
-		 * device mapped pages can only be returned if the caller
-		 * will manage the page reference count.
-		 *
-		 * At least one of FOLL_GET | FOLL_PIN must be set, so
-		 * assert that here:
-		 */
-		if (!(flags & (FOLL_GET | FOLL_PIN)))
-			return ERR_PTR(-EEXIST);
-
-		if (flags & FOLL_TOUCH)
-			touch_pud(vma, addr, pudp, flags & FOLL_WRITE);
-
-		ctx->pgmap = get_dev_pagemap(pfn, ctx->pgmap);
-		if (!ctx->pgmap)
-			return ERR_PTR(-EFAULT);
-	}
-
 	page = pfn_to_page(pfn);
-	if (!pud_devmap(pud) && !pud_write(pud) &&
-	    gup_must_unshare(vma, flags, page))
+	if (!pud_write(pud) && gup_must_unshare(vma, flags, page))
 		return ERR_PTR(-EMLINK);
 
 	ret = try_grab_page(page, flags);
@@ -921,8 +899,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	page = vm_normal_page(vma, address, pte);
 
 	/*
-	 * We only care about anon pages in can_follow_write_pte() and don't
-	 * have to worry about pte_devmap() because they are never anon.
+	 * We only care about anon pages in can_follow_write_pte().
 	 */
 	if ((flags & FOLL_WRITE) &&
 	    !can_follow_write_pte(pte, page, vma, flags)) {
 		page = NULL;
 		goto out;
 	}
@@ -930,18 +907,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 		goto out;
 	}
-	if (!page && pte_devmap(pte) && (flags & (FOLL_GET | FOLL_PIN))) {
-		/*
-		 * Only return device mapping pages in the FOLL_GET or FOLL_PIN
-		 * case since they are only valid while holding the pgmap
-		 * reference.
-		 */
-		*pgmap = get_dev_pagemap(pte_pfn(pte), *pgmap);
-		if (*pgmap)
-			page = pte_page(pte);
-		else
-			goto no_page;
-	} else if (unlikely(!page)) {
+	if (unlikely(!page)) {
 		if (flags & FOLL_DUMP) {
 			/* Avoid special (like zero) pages in core dumps */
 			page = ERR_PTR(-EFAULT);
@@ -1025,14 +991,6 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 	if (unlikely(is_hugepd(__hugepd(pmd_val(pmdval)))))
 		return follow_hugepd(vma, __hugepd(pmd_val(pmdval)),
 				     address, PMD_SHIFT, flags, ctx);
-	if (pmd_devmap(pmdval)) {
-		ptl = pmd_lock(mm, pmd);
-		page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
-		spin_unlock(ptl);
-		if (page)
-			return page;
-		return no_page_table(vma, flags, address);
-	}
 	if (likely(!pmd_leaf(pmdval)))
 		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
@@ -2920,7 +2878,7 @@ static int gup_fast_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
 		int *nr)
 {
 	struct dev_pagemap *pgmap = NULL;
-	int nr_start = *nr, ret = 0;
+	int ret = 0;
 	pte_t *ptep, *ptem;
 
 	ptem = ptep = pte_offset_map(&pmd, addr);
@@ -2944,16 +2902,7 @@ static int gup_fast_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
 		if (!pte_access_permitted(pte, flags & FOLL_WRITE))
 			goto pte_unmap;
 
-		if (pte_devmap(pte)) {
-			if (unlikely(flags & FOLL_LONGTERM))
-				goto pte_unmap;
-
-			pgmap = get_dev_pagemap(pte_pfn(pte), pgmap);
-			if (unlikely(!pgmap)) {
-				gup_fast_undo_dev_pagemap(nr, nr_start, flags, pages);
-				goto pte_unmap;
-			}
-		} else if (pte_special(pte))
+		if (pte_special(pte))
 			goto pte_unmap;
 
 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
@@ -3024,91 +2973,6 @@ static int gup_fast_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
 }
 #endif /* CONFIG_ARCH_HAS_PTE_SPECIAL */
 
-#if defined(CONFIG_ARCH_HAS_PTE_DEVMAP) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
-static int gup_fast_devmap_leaf(unsigned long pfn, unsigned long addr,
-	unsigned long end, unsigned int flags, struct page **pages, int *nr)
-{
-	int nr_start = *nr;
-	struct dev_pagemap *pgmap = NULL;
-
-	do {
-		struct folio *folio;
-		struct page *page = pfn_to_page(pfn);
-
-		pgmap = get_dev_pagemap(pfn, pgmap);
-		if (unlikely(!pgmap)) {
-			gup_fast_undo_dev_pagemap(nr, nr_start, flags, pages);
-			break;
-		}
-
-		folio = try_grab_folio(page, 1, flags);
-		if (!folio) {
-			gup_fast_undo_dev_pagemap(nr, nr_start, flags, pages);
-			break;
-		}
-		folio_set_referenced(folio);
-		pages[*nr] = page;
-		(*nr)++;
-		pfn++;
-	} while (addr += PAGE_SIZE, addr != end);
-
-	put_dev_pagemap(pgmap);
-	return addr == end;
-}
-
-static int gup_fast_devmap_pmd_leaf(pmd_t orig, pmd_t *pmdp, unsigned long addr,
-		unsigned long end, unsigned int flags, struct page **pages,
-		int *nr)
-{
-	unsigned long fault_pfn;
-	int nr_start = *nr;
-
-	fault_pfn = pmd_pfn(orig) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
-	if (!gup_fast_devmap_leaf(fault_pfn, addr, end, flags, pages, nr))
-		return 0;
-
-	if (unlikely(pmd_val(orig) != pmd_val(*pmdp))) {
-		gup_fast_undo_dev_pagemap(nr, nr_start, flags, pages);
-		return 0;
-	}
-	return 1;
-}
-
-static int gup_fast_devmap_pud_leaf(pud_t orig, pud_t *pudp, unsigned long addr,
-		unsigned long end, unsigned int flags, struct page **pages,
-		int *nr)
-{
-	unsigned long fault_pfn;
-	int nr_start = *nr;
-
-	fault_pfn = pud_pfn(orig) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
-	if (!gup_fast_devmap_leaf(fault_pfn, addr, end, flags, pages, nr))
-		return 0;
-
-	if (unlikely(pud_val(orig) != pud_val(*pudp))) {
-		gup_fast_undo_dev_pagemap(nr, nr_start, flags, pages);
-		return 0;
-	}
-	return 1;
-}
-#else
-static int gup_fast_devmap_pmd_leaf(pmd_t orig, pmd_t *pmdp, unsigned long addr,
-		unsigned
long end, unsigned int flags, struct page **pages, - int *nr) -{ - BUILD_BUG(); - return 0; -} - -static int gup_fast_devmap_pud_leaf(pud_t pud, pud_t *pudp, unsigned long addr, - unsigned long end, unsigned int flags, struct page **pages, - int *nr) -{ - BUILD_BUG(); - return 0; -} -#endif - static int gup_fast_pmd_leaf(pmd_t orig, pmd_t *pmdp, unsigned long addr, unsigned long end, unsigned int flags, struct page **pages, int *nr) @@ -3120,13 +2984,7 @@ static int gup_fast_pmd_leaf(pmd_t orig, pmd_t *pmdp, unsigned long addr, if (!pmd_access_permitted(orig, flags & FOLL_WRITE)) return 0; - if (pmd_devmap(orig)) { - if (unlikely(flags & FOLL_LONGTERM)) - return 0; - return gup_fast_devmap_pmd_leaf(orig, pmdp, addr, end, flags, - pages, nr); - } - + // TODO: As a side-effect does this allow long-term pinning of DAX pages? page = pmd_page(orig); refs = record_subpages(page, PMD_SIZE, addr, end, pages + *nr); @@ -3164,13 +3022,7 @@ static int gup_fast_pud_leaf(pud_t orig, pud_t *pudp, unsigned long addr, if (!pud_access_permitted(orig, flags & FOLL_WRITE)) return 0; - if (pud_devmap(orig)) { - if (unlikely(flags & FOLL_LONGTERM)) - return 0; - return gup_fast_devmap_pud_leaf(orig, pudp, addr, end, flags, - pages, nr); - } - + // TODO: FOLL_LONGTERM? page = pud_page(orig); refs = record_subpages(page, PUD_SIZE, addr, end, pages + *nr); @@ -3209,8 +3061,6 @@ static int gup_fast_pgd_leaf(pgd_t orig, pgd_t *pgdp, unsigned long addr, if (!pgd_access_permitted(orig, flags & FOLL_WRITE)) return 0; - BUILD_BUG_ON(pgd_devmap(orig)); - page = pgd_page(orig); refs = record_subpages(page, PGDIR_SIZE, addr, end, pages + *nr); diff --git a/mm/hmm.c b/mm/hmm.c index 26e1905..7f78b0b 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -298,7 +298,6 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, * fall through and treat it like a normal page. 
*/ if (!vm_normal_page(walk->vma, addr, pte) && - !pte_devmap(pte) && !is_zero_pfn(pte_pfn(pte))) { if (hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0)) { pte_unmap(ptep); @@ -351,7 +350,7 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp, return hmm_pfns_fill(start, end, range, HMM_PFN_ERROR); } - if (pmd_devmap(pmd) || pmd_trans_huge(pmd)) { + if (pmd_trans_huge(pmd)) { /* * No need to take pmd_lock here, even if some other thread * is splitting the huge pmd we will get that event through @@ -362,7 +361,7 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp, * values. */ pmd = pmdp_get_lockless(pmdp); - if (!pmd_devmap(pmd) && !pmd_trans_huge(pmd)) + if (!pmd_trans_huge(pmd)) goto again; return hmm_vma_handle_pmd(walk, addr, end, hmm_pfns, pmd); @@ -429,7 +428,7 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end, return hmm_vma_walk_hole(start, end, -1, walk); } - if (pud_leaf(pud) && pud_devmap(pud)) { + if (pud_leaf(pud) && vma_is_dax(walk->vma)) { unsigned long i, npages, pfn; unsigned int required_fault; unsigned long *hmm_pfns; diff --git a/mm/huge_memory.c b/mm/huge_memory.c index de39af4..2e164c3 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1151,8 +1151,6 @@ vm_fault_t dax_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write) } entry = pmd_mkhuge(pfn_t_pmd(pfn, vma->vm_page_prot)); - if (pfn_t_devmap(pfn)) - entry = pmd_mkdevmap(entry); if (write) { entry = pmd_mkyoung(pmd_mkdirty(entry)); entry = maybe_pmd_mkwrite(entry, vma); @@ -1230,8 +1228,6 @@ vm_fault_t dax_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write) } entry = pud_mkhuge(pfn_t_pud(pfn, prot)); - if (pfn_t_devmap(pfn)) - entry = pud_mkdevmap(entry); if (write) { entry = pud_mkyoung(pud_mkdirty(entry)); entry = maybe_pud_mkwrite(entry, vma); @@ -1267,46 +1263,6 @@ void touch_pmd(struct vm_area_struct *vma, unsigned long addr, update_mmu_cache_pmd(vma, addr, pmd); } -struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr, - 
pmd_t *pmd, int flags, struct dev_pagemap **pgmap) -{ - unsigned long pfn = pmd_pfn(*pmd); - struct mm_struct *mm = vma->vm_mm; - struct page *page; - int ret; - - assert_spin_locked(pmd_lockptr(mm, pmd)); - - if (flags & FOLL_WRITE && !pmd_write(*pmd)) - return NULL; - - if (pmd_present(*pmd) && pmd_devmap(*pmd)) - /* pass */; - else - return NULL; - - if (flags & FOLL_TOUCH) - touch_pmd(vma, addr, pmd, flags & FOLL_WRITE); - - /* - * device mapped pages can only be returned if the - * caller will manage the page reference count. - */ - if (!(flags & (FOLL_GET | FOLL_PIN))) - return ERR_PTR(-EEXIST); - - pfn += (addr & ~PMD_MASK) >> PAGE_SHIFT; - *pgmap = get_dev_pagemap(pfn, *pgmap); - if (!*pgmap) - return ERR_PTR(-EFAULT); - page = pfn_to_page(pfn); - ret = try_grab_page(page, flags); - if (ret) - page = ERR_PTR(ret); - - return page; -} - int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm, pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr, struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma) @@ -1438,7 +1394,7 @@ int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm, ret = -EAGAIN; pud = *src_pud; - if (unlikely(!pud_trans_huge(pud) && !pud_devmap(pud))) + if (unlikely(!pud_trans_huge(pud))) goto out_unlock; /* @@ -2210,8 +2166,7 @@ spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma) { spinlock_t *ptl; ptl = pmd_lock(vma->vm_mm, pmd); - if (likely(is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || - pmd_devmap(*pmd))) + if (likely(is_swap_pmd(*pmd) || pmd_trans_huge(*pmd))) return ptl; spin_unlock(ptl); return NULL; @@ -2228,7 +2183,7 @@ spinlock_t *__pud_trans_huge_lock(pud_t *pud, struct vm_area_struct *vma) spinlock_t *ptl; ptl = pud_lock(vma->vm_mm, pud); - if (likely(pud_trans_huge(*pud) || pud_devmap(*pud))) + if (likely(pud_trans_huge(*pud))) return ptl; spin_unlock(ptl); return NULL; @@ -2278,7 +2233,7 @@ static void __split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud, 
VM_BUG_ON(haddr & ~HPAGE_PUD_MASK); VM_BUG_ON_VMA(vma->vm_start > haddr, vma); VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PUD_SIZE, vma); - VM_BUG_ON(!pud_trans_huge(*pud) && !pud_devmap(*pud)); + VM_BUG_ON(!pud_trans_huge(*pud)); count_vm_event(THP_SPLIT_PUD); @@ -2311,7 +2266,7 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud, (address & HPAGE_PUD_MASK) + HPAGE_PUD_SIZE); mmu_notifier_invalidate_range_start(&range); ptl = pud_lock(vma->vm_mm, pud); - if (unlikely(!pud_trans_huge(*pud) && !pud_devmap(*pud))) + if (unlikely(!pud_trans_huge(*pud))) goto out; __split_huge_pud_locked(vma, pud, range.start); @@ -2379,8 +2334,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, VM_BUG_ON(haddr & ~HPAGE_PMD_MASK); VM_BUG_ON_VMA(vma->vm_start > haddr, vma); VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PMD_SIZE, vma); - VM_BUG_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd) - && !pmd_devmap(*pmd)); + VM_BUG_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd)); count_vm_event(THP_SPLIT_PMD); @@ -2603,8 +2557,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd, VM_BUG_ON(freeze && !folio); VM_WARN_ON_ONCE(folio && !folio_test_locked(folio)); - if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) || - is_pmd_migration_entry(*pmd)) { + if (pmd_trans_huge(*pmd) || is_pmd_migration_entry(*pmd)) { /* * It's safe to call pmd_page when folio is set because it's * guaranteed that pmd is present. 
diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 774a97e..d4996ca 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -942,8 +942,6 @@ static int find_pmd_or_thp_or_none(struct mm_struct *mm, return SCAN_PMD_NULL; if (pmd_trans_huge(pmde)) return SCAN_PMD_MAPPED; - if (pmd_devmap(pmde)) - return SCAN_PMD_NULL; if (pmd_bad(pmde)) return SCAN_PMD_NULL; return SCAN_SUCCEED; diff --git a/mm/mapping_dirty_helpers.c b/mm/mapping_dirty_helpers.c index 2f8829b..208b428 100644 --- a/mm/mapping_dirty_helpers.c +++ b/mm/mapping_dirty_helpers.c @@ -129,7 +129,7 @@ static int wp_clean_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long end, pmd_t pmdval = pmdp_get_lockless(pmd); /* Do not split a huge pmd, present or migrated */ - if (pmd_trans_huge(pmdval) || pmd_devmap(pmdval)) { + if (pmd_trans_huge(pmdval)) { WARN_ON(pmd_write(pmdval) || pmd_dirty(pmdval)); walk->action = ACTION_CONTINUE; } @@ -152,7 +152,7 @@ static int wp_clean_pud_entry(pud_t *pud, unsigned long addr, unsigned long end, pud_t pudval = READ_ONCE(*pud); /* Do not split a huge pud */ - if (pud_trans_huge(pudval) || pud_devmap(pudval)) { + if (pud_trans_huge(pudval)) { WARN_ON(pud_write(pudval) || pud_dirty(pudval)); walk->action = ACTION_CONTINUE; } diff --git a/mm/memory.c b/mm/memory.c index 4f26a1f..bd80198 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -595,16 +595,6 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr, return NULL; if (is_zero_pfn(pfn)) return NULL; - if (pte_devmap(pte)) - /* - * NOTE: New users of ZONE_DEVICE will not set pte_devmap() - * and will have refcounts incremented on their struct pages - * when they are inserted into PTEs, thus they are safe to - * return here. Legacy ZONE_DEVICE pages that set pte_devmap() - * do not have refcounts. Example of legacy ZONE_DEVICE is - * MEMORY_DEVICE_FS_DAX type in pmem or virtio_fs drivers. 
- */ - return NULL; print_bad_pte(vma, addr, pte, NULL); return NULL; @@ -680,8 +670,6 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr, } } - if (pmd_devmap(pmd)) - return NULL; if (is_huge_zero_pmd(pmd)) return NULL; if (unlikely(pfn > highest_memmap_pfn)) @@ -1223,8 +1211,7 @@ copy_pmd_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma, src_pmd = pmd_offset(src_pud, addr); do { next = pmd_addr_end(addr, end); - if (is_swap_pmd(*src_pmd) || pmd_trans_huge(*src_pmd) - || pmd_devmap(*src_pmd)) { + if (is_swap_pmd(*src_pmd) || pmd_trans_huge(*src_pmd)) { int err; VM_BUG_ON_VMA(next-addr != HPAGE_PMD_SIZE, src_vma); err = copy_huge_pmd(dst_mm, src_mm, dst_pmd, src_pmd, @@ -1260,7 +1247,7 @@ copy_pud_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma, src_pud = pud_offset(src_p4d, addr); do { next = pud_addr_end(addr, end); - if (pud_trans_huge(*src_pud) || pud_devmap(*src_pud)) { + if (pud_trans_huge(*src_pud)) { int err; VM_BUG_ON_VMA(next-addr != HPAGE_PUD_SIZE, src_vma); @@ -1698,7 +1685,7 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb, pmd = pmd_offset(pud, addr); do { next = pmd_addr_end(addr, end); - if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd)) { + if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd)) { if (next - addr != HPAGE_PMD_SIZE) __split_huge_pmd(vma, pmd, addr, false, NULL); else if (zap_huge_pmd(tlb, vma, pmd, addr)) { @@ -1740,7 +1727,7 @@ static inline unsigned long zap_pud_range(struct mmu_gather *tlb, pud = pud_offset(p4d, addr); do { next = pud_addr_end(addr, end); - if (pud_trans_huge(*pud) || pud_devmap(*pud)) { + if (pud_trans_huge(*pud)) { if (next - addr != HPAGE_PUD_SIZE) { mmap_assert_locked(tlb->mm); split_huge_pud(vma, pud, addr); @@ -2326,10 +2313,7 @@ static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr, } /* Ok, finally just insert the thing.. 
*/ - if (pfn_t_devmap(pfn)) - entry = pte_mkdevmap(pfn_t_pte(pfn, prot)); - else - entry = pte_mkspecial(pfn_t_pte(pfn, prot)); + entry = pte_mkspecial(pfn_t_pte(pfn, prot)); if (mkwrite) { entry = pte_mkyoung(entry); @@ -2437,8 +2421,6 @@ static bool vm_mixed_ok(struct vm_area_struct *vma, pfn_t pfn) /* these checks mirror the abort conditions in vm_normal_page */ if (vma->vm_flags & VM_MIXEDMAP) return true; - if (pfn_t_devmap(pfn)) - return true; if (pfn_t_special(pfn)) return true; if (is_zero_pfn(pfn_t_to_pfn(pfn))) @@ -2469,8 +2451,7 @@ static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma, * than insert_pfn). If a zero_pfn were inserted into a VM_MIXEDMAP * without pte special, it would there be refcounted as a normal page. */ - if (!IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL) && - !pfn_t_devmap(pfn) && pfn_t_valid(pfn)) { + if (!IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL) && pfn_t_valid(pfn)) { struct page *page; /* @@ -2514,8 +2495,6 @@ vm_fault_t dax_insert_pfn(struct vm_area_struct *vma, if (!pfn_t_valid(pfn_t)) return VM_FAULT_SIGBUS; - WARN_ON_ONCE(pfn_t_devmap(pfn_t)); - if (WARN_ON(is_zero_pfn(pfn) && write)) return VM_FAULT_SIGBUS; @@ -5528,7 +5507,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma, pud_t orig_pud = *vmf.pud; barrier(); - if (pud_trans_huge(orig_pud) || pud_devmap(orig_pud)) { + if (pud_trans_huge(orig_pud)) { /* * TODO once we support anonymous PUDs: NUMA case and @@ -5569,7 +5548,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma, pmd_migration_entry_wait(mm, vmf.pmd); return 0; } - if (pmd_trans_huge(vmf.orig_pmd) || pmd_devmap(vmf.orig_pmd)) { + if (pmd_trans_huge(vmf.orig_pmd)) { if (pmd_protnone(vmf.orig_pmd) && vma_is_accessible(vma)) return do_huge_pmd_numa_page(&vmf); diff --git a/mm/migrate_device.c b/mm/migrate_device.c index 4fdd8fa..4277516 100644 --- a/mm/migrate_device.c +++ b/mm/migrate_device.c @@ -596,7 +596,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate, 
pmdp = pmd_alloc(mm, pudp, addr); if (!pmdp) goto abort; - if (pmd_trans_huge(*pmdp) || pmd_devmap(*pmdp)) + if (pmd_trans_huge(*pmdp)) goto abort; if (pte_alloc(mm, pmdp)) goto abort; diff --git a/mm/mprotect.c b/mm/mprotect.c index 8c6cd88..c717c74 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -391,7 +391,7 @@ static inline long change_pmd_range(struct mmu_gather *tlb, } _pmd = pmdp_get_lockless(pmd); - if (is_swap_pmd(_pmd) || pmd_trans_huge(_pmd) || pmd_devmap(_pmd)) { + if (is_swap_pmd(_pmd) || pmd_trans_huge(_pmd)) { if ((next - addr != HPAGE_PMD_SIZE) || pgtable_split_needed(vma, cp_flags)) { __split_huge_pmd(vma, pmd, addr, false, NULL); diff --git a/mm/mremap.c b/mm/mremap.c index 5f96bc5..57bb0b9 100644 --- a/mm/mremap.c +++ b/mm/mremap.c @@ -587,7 +587,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma, new_pud = alloc_new_pud(vma->vm_mm, vma, new_addr); if (!new_pud) break; - if (pud_trans_huge(*old_pud) || pud_devmap(*old_pud)) { + if (pud_trans_huge(*old_pud)) { if (extent == HPAGE_PUD_SIZE) { move_pgt_entry(HPAGE_PUD, vma, old_addr, new_addr, old_pud, new_pud, need_rmap_locks); @@ -609,8 +609,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma, if (!new_pmd) break; again: - if (is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd) || - pmd_devmap(*old_pmd)) { + if (is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd)) { if (extent == HPAGE_PMD_SIZE && move_pgt_entry(HPAGE_PMD, vma, old_addr, new_addr, old_pmd, new_pmd, need_rmap_locks)) diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c index ae5cc42..77da636 100644 --- a/mm/page_vma_mapped.c +++ b/mm/page_vma_mapped.c @@ -235,8 +235,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw) */ pmde = pmdp_get_lockless(pvmw->pmd); - if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde) || - (pmd_present(pmde) && pmd_devmap(pmde))) { + if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde)) { pvmw->ptl = pmd_lock(mm, pvmw->pmd); pmde = *pvmw->pmd; if 
(!pmd_present(pmde)) { @@ -251,7 +250,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw) return not_found(pvmw); return true; } - if (likely(pmd_trans_huge(pmde) || pmd_devmap(pmde))) { + if (likely(pmd_trans_huge(pmde))) { if (pvmw->flags & PVMW_MIGRATION) return not_found(pvmw); if (!check_pmd(pmd_pfn(pmde), pvmw)) diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c index a78a4ad..093c435 100644 --- a/mm/pgtable-generic.c +++ b/mm/pgtable-generic.c @@ -139,8 +139,7 @@ pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address, { pmd_t pmd; VM_BUG_ON(address & ~HPAGE_PMD_MASK); - VM_BUG_ON(pmd_present(*pmdp) && !pmd_trans_huge(*pmdp) && - !pmd_devmap(*pmdp)); + VM_BUG_ON(pmd_present(*pmdp) && !pmd_trans_huge(*pmdp)); pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp); flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE); return pmd; @@ -153,7 +152,7 @@ pud_t pudp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address, pud_t pud; VM_BUG_ON(address & ~HPAGE_PUD_MASK); - VM_BUG_ON(!pud_trans_huge(*pudp) && !pud_devmap(*pudp)); + VM_BUG_ON(!pud_trans_huge(*pudp)); pud = pudp_huge_get_and_clear(vma->vm_mm, address, pudp); flush_pud_tlb_range(vma, address, address + HPAGE_PUD_SIZE); return pud; @@ -293,7 +292,7 @@ pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp) *pmdvalp = pmdval; if (unlikely(pmd_none(pmdval) || is_pmd_migration_entry(pmdval))) goto nomap; - if (unlikely(pmd_trans_huge(pmdval) || pmd_devmap(pmdval))) + if (unlikely(pmd_trans_huge(pmdval))) goto nomap; if (unlikely(pmd_bad(pmdval))) { pmd_clear_bad(pmd); diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c index defa510..dfd95e0 100644 --- a/mm/userfaultfd.c +++ b/mm/userfaultfd.c @@ -1685,7 +1685,7 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start, ptl = pmd_trans_huge_lock(src_pmd, src_vma); if (ptl) { - if (pmd_devmap(*src_pmd)) { + if (vma_is_dax(src_vma)) { spin_unlock(ptl); err = -ENOENT; 
break; diff --git a/mm/vmscan.c index 2e34de9..e8badb4 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -3285,7 +3285,7 @@ static unsigned long get_pte_pfn(pte_t pte, struct vm_area_struct *vma, unsigned if (!pte_present(pte) || is_zero_pfn(pfn)) return -1; - if (WARN_ON_ONCE(pte_devmap(pte) || pte_special(pte))) + if (WARN_ON_ONCE(pte_special(pte))) return -1; if (WARN_ON_ONCE(!pfn_valid(pfn))) @@ -3303,9 +3303,6 @@ static unsigned long get_pmd_pfn(pmd_t pmd, struct vm_area_struct *vma, unsigned if (!pmd_present(pmd) || is_huge_zero_pmd(pmd)) return -1; - if (WARN_ON_ONCE(pmd_devmap(pmd))) - return -1; - if (WARN_ON_ONCE(!pfn_valid(pfn))) return -1; From patchwork Thu Jun 27 00:54:28 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alistair Popple X-Patchwork-Id: 13713631 X-Patchwork-Delegate: iweiny@gmail.com
From: Alistair Popple To: dan.j.williams@intel.com, vishal.l.verma@intel.com, dave.jiang@intel.com, logang@deltatee.com, bhelgaas@google.com, jack@suse.cz, jgg@ziepe.ca Cc: catalin.marinas@arm.com, will@kernel.org, mpe@ellerman.id.au, npiggin@gmail.com, dave.hansen@linux.intel.com, ira.weiny@intel.com, willy@infradead.org, djwong@kernel.org, tytso@mit.edu, linmiaohe@huawei.com, david@redhat.com, peterx@redhat.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, nvdimm@lists.linux.dev, linux-cxl@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org, jhubbard@nvidia.com, hch@lst.de, david@fromorbit.com, Alistair Popple Subject:
[PATCH 13/13] mm: Remove devmap related functions and page table bits Date: Thu, 27 Jun 2024 10:54:28 +1000 Message-ID: <47c26640cd85f3db2e0a2796047199bb984d1b3f.1719386613.git-series.apopple@nvidia.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: nvdimm@lists.linux.dev List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0
Now that DAX and all other reference counts to ZONE_DEVICE pages are managed normally there is no need for the special devmap PTE/PMD/PUD page table bits. So drop all references to these, freeing up a software defined page table bit on architectures supporting it.
Signed-off-by: Alistair Popple
Acked-by: Will Deacon # arm64
---
 Documentation/mm/arch_pgtable_helpers.rst     |  6 +--
 arch/arm64/Kconfig                            |  1 +-
 arch/arm64/include/asm/pgtable-prot.h         |  1 +-
 arch/arm64/include/asm/pgtable.h              | 24 +--------
 arch/powerpc/Kconfig                          |  1 +-
 arch/powerpc/include/asm/book3s/64/hash-4k.h  |  6 +--
 arch/powerpc/include/asm/book3s/64/hash-64k.h |  7 +--
 arch/powerpc/include/asm/book3s/64/pgtable.h  | 52 +------------------
 arch/powerpc/include/asm/book3s/64/radix.h    | 14 +-----
 arch/x86/Kconfig                              |  1 +-
 arch/x86/include/asm/pgtable.h                | 50 +-----------------
 arch/x86/include/asm/pgtable_types.h          |  5 +--
 include/linux/mm.h                            |  7 +--
 include/linux/pfn_t.h                         | 20 +-------
 include/linux/pgtable.h                       | 19 +------
 mm/Kconfig                                    |  4 +-
 mm/debug_vm_pgtable.c                         | 59 +--------------------
 mm/hmm.c                                      |  3 +-
 18 files changed, 11 insertions(+), 269 deletions(-)

diff --git a/Documentation/mm/arch_pgtable_helpers.rst b/Documentation/mm/arch_pgtable_helpers.rst
index ad50ca6..9230bc7 100644
--- a/Documentation/mm/arch_pgtable_helpers.rst
+++ b/Documentation/mm/arch_pgtable_helpers.rst
@@ -30,8 +30,6 @@ PTE Page Table Helpers
 +---------------------------+--------------------------------------------------+
 | pte_protnone              | Tests a PROT_NONE PTE                            |
 +---------------------------+--------------------------------------------------+
-| pte_devmap                | Tests a ZONE_DEVICE mapped PTE                   |
-+---------------------------+--------------------------------------------------+
 | pte_soft_dirty            | Tests a soft dirty PTE                           |
 +---------------------------+--------------------------------------------------+
 | pte_swp_soft_dirty        | Tests a soft dirty swapped PTE                   |
@@ -106,8 +104,6 @@ PMD Page Table Helpers
 +---------------------------+--------------------------------------------------+
 | pmd_protnone              | Tests a PROT_NONE PMD                            |
 +---------------------------+--------------------------------------------------+
-| pmd_devmap                | Tests a ZONE_DEVICE mapped PMD                   |
-+---------------------------+--------------------------------------------------+
 | pmd_soft_dirty            | Tests a soft dirty PMD                           |
 +---------------------------+--------------------------------------------------+
 | pmd_swp_soft_dirty        | Tests a soft dirty swapped PMD                   |
@@ -181,8 +177,6 @@ PUD Page Table Helpers
 +---------------------------+--------------------------------------------------+
 | pud_write                 | Tests a writable PUD                             |
 +---------------------------+--------------------------------------------------+
-| pud_devmap                | Tests a ZONE_DEVICE mapped PUD                   |
-+---------------------------+--------------------------------------------------+
 | pud_mkyoung               | Creates a young PUD                              |
 +---------------------------+--------------------------------------------------+
 | pud_mkold                 | Creates an old PUD                               |
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5d91259..beb8c3c 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -35,7 +35,6 @@ config ARM64
 	select ARCH_HAS_MEMBARRIER_SYNC_CORE
 	select ARCH_HAS_NMI_SAFE_THIS_CPU_OPS
 	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
-	select ARCH_HAS_PTE_DEVMAP
 	select ARCH_HAS_PTE_SPECIAL
 	select ARCH_HAS_HW_PTE_YOUNG
 	select ARCH_HAS_SETUP_DMA_OPS
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index b11cfb9..043b102 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -17,7 +17,6 @@
 #define PTE_SWP_EXCLUSIVE	(_AT(pteval_t, 1) << 2)	 /* only for swp ptes */
 #define PTE_DIRTY		(_AT(pteval_t, 1) << 55)
 #define PTE_SPECIAL		(_AT(pteval_t, 1) << 56)
-#define PTE_DEVMAP		(_AT(pteval_t, 1) << 57)
 
 /*
  * PTE_PRESENT_INVALID=1 & PTE_VALID=0 indicates that the pte's fields should be
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index f8efbc1..9193537 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -107,7 +106,6 @@ static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
 #define pte_user(pte)		(!!(pte_val(pte) & PTE_USER))
 #define pte_user_exec(pte)	(!(pte_val(pte) & PTE_UXN))
 #define pte_cont(pte)		(!!(pte_val(pte) & PTE_CONT))
-#define pte_devmap(pte)		(!!(pte_val(pte) & PTE_DEVMAP))
 #define pte_tagged(pte)		((pte_val(pte) & PTE_ATTRINDX_MASK) == \
				 PTE_ATTRINDX(MT_NORMAL_TAGGED))
 
@@ -269,11 +268,6 @@ static inline pmd_t pmd_mkcont(pmd_t pmd)
 	return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
 }
 
-static inline pte_t pte_mkdevmap(pte_t pte)
-{
-	return set_pte_bit(pte, __pgprot(PTE_DEVMAP | PTE_SPECIAL));
-}
-
 #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
 static inline int pte_uffd_wp(pte_t pte)
 {
@@ -569,14 +563,6 @@ static inline int pmd_trans_huge(pmd_t pmd)
 
 #define pmd_mkhuge(pmd)		(__pmd(pmd_val(pmd) & ~PMD_TABLE_BIT))
 
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-#define pmd_devmap(pmd)		pte_devmap(pmd_pte(pmd))
-#endif
-static inline pmd_t pmd_mkdevmap(pmd_t pmd)
-{
-	return pte_pmd(set_pte_bit(pmd_pte(pmd), __pgprot(PTE_DEVMAP)));
-}
-
 #define __pmd_to_phys(pmd)	__pte_to_phys(pmd_pte(pmd))
 #define __phys_to_pmd_val(phys)	__phys_to_pte_val(phys)
 #define pmd_pfn(pmd)		((__pmd_to_phys(pmd) & PMD_MASK) >> PAGE_SHIFT)
@@ -1114,16 +1100,6 @@ static inline int pmdp_set_access_flags(struct vm_area_struct *vma,
 	return __ptep_set_access_flags(vma, address, (pte_t *)pmdp,
							pmd_pte(entry), dirty);
 }
-
-static inline int pud_devmap(pud_t pud)
-{
-	return 0;
-}
-
-static inline int pgd_devmap(pgd_t pgd)
-{
-	return 0;
-}
 #endif
 
 #ifdef CONFIG_PAGE_TABLE_CHECK
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index c88c6d4..c56a078 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -145,7 +145,6 @@ config PPC
 	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
 	select ARCH_HAS_PHYS_TO_DMA
 	select ARCH_HAS_PMEM_API
-	select ARCH_HAS_PTE_DEVMAP		if PPC_BOOK3S_64
 	select ARCH_HAS_PTE_SPECIAL
 	select ARCH_HAS_SCALED_CPUTIME		if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
 	select ARCH_HAS_SET_MEMORY
diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h
index 6472b08..51d868c 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-4k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h
@@ -155,12 +155,6 @@ extern pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm,
 extern int hash__has_transparent_hugepage(void);
 #endif
 
-static inline pmd_t hash__pmd_mkdevmap(pmd_t pmd)
-{
-	BUG();
-	return pmd;
-}
-
 #endif /* !__ASSEMBLY__ */
 
 #endif /* _ASM_POWERPC_BOOK3S_64_HASH_4K_H */
diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h b/arch/powerpc/include/asm/book3s/64/hash-64k.h
index 0bf6fd0..0fb5b7d 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-64k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h
@@ -259,7 +259,7 @@ static inline void mark_hpte_slot_valid(unsigned char *hpte_slot_array,
  */
 static inline int hash__pmd_trans_huge(pmd_t pmd)
 {
-	return !!((pmd_val(pmd) & (_PAGE_PTE | H_PAGE_THP_HUGE | _PAGE_DEVMAP)) ==
+	return !!((pmd_val(pmd) & (_PAGE_PTE | H_PAGE_THP_HUGE)) ==
		  (_PAGE_PTE | H_PAGE_THP_HUGE));
 }
 
@@ -281,11 +281,6 @@ extern pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm,
 extern int hash__has_transparent_hugepage(void);
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
-static inline pmd_t hash__pmd_mkdevmap(pmd_t pmd)
-{
-	return __pmd(pmd_val(pmd) | (_PAGE_PTE | H_PAGE_THP_HUGE | _PAGE_DEVMAP));
-}
-
 #endif /* __ASSEMBLY__ */
 #endif /* _ASM_POWERPC_BOOK3S_64_HASH_64K_H */
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 8f9432e..a48ad02 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -88,7 +88,6 @@
 
 #define _PAGE_SOFT_DIRTY	_RPAGE_SW3 /* software: software dirty tracking */
 #define _PAGE_SPECIAL		_RPAGE_SW2 /* software: special page */
-#define _PAGE_DEVMAP		_RPAGE_SW1 /* software: ZONE_DEVICE page */
 
 /*
  * Drivers request for cache inhibited pte mapping using _PAGE_NO_CACHE
@@ -109,7 +108,7 @@
  */
 #define _HPAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
			 _PAGE_ACCESSED | H_PAGE_THP_HUGE | _PAGE_PTE | \
-			 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
+			 _PAGE_SOFT_DIRTY)
 /*
  * user access blocked by key
  */
@@ -123,7 +122,7 @@
  */
 #define _PAGE_CHG_MASK	(PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
			 _PAGE_ACCESSED | _PAGE_SPECIAL | _PAGE_PTE |	\
-			 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
+			 _PAGE_SOFT_DIRTY)
 
 /*
  * We define 2 sets of base prot bits, one for basic pages (ie,
@@ -619,24 +618,6 @@ static inline pte_t pte_mkhuge(pte_t pte)
 	return pte;
 }
 
-static inline pte_t pte_mkdevmap(pte_t pte)
-{
-	return __pte_raw(pte_raw(pte) | cpu_to_be64(_PAGE_SPECIAL | _PAGE_DEVMAP));
-}
-
-/*
- * This is potentially called with a pmd as the argument, in which case it's not
- * safe to check _PAGE_DEVMAP unless we also confirm that _PAGE_PTE is set.
- * That's because the bit we use for _PAGE_DEVMAP is not reserved for software
- * use in page directory entries (ie. non-ptes).
- */
-static inline int pte_devmap(pte_t pte)
-{
-	__be64 mask = cpu_to_be64(_PAGE_DEVMAP | _PAGE_PTE);
-
-	return (pte_raw(pte) & mask) == mask;
-}
-
 static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 {
 	/* FIXME!! check whether this need to be a conditional */
@@ -1387,35 +1368,6 @@ static inline bool arch_needs_pgtable_deposit(void)
 }
 
 extern void serialize_against_pte_lookup(struct mm_struct *mm);
-
-static inline pmd_t pmd_mkdevmap(pmd_t pmd)
-{
-	if (radix_enabled())
-		return radix__pmd_mkdevmap(pmd);
-	return hash__pmd_mkdevmap(pmd);
-}
-
-static inline pud_t pud_mkdevmap(pud_t pud)
-{
-	if (radix_enabled())
-		return radix__pud_mkdevmap(pud);
-	BUG();
-	return pud;
-}
-
-static inline int pmd_devmap(pmd_t pmd)
-{
-	return pte_devmap(pmd_pte(pmd));
-}
-
-static inline int pud_devmap(pud_t pud)
-{
-	return pte_devmap(pud_pte(pud));
-}
-
-static inline int pgd_devmap(pgd_t pgd)
-{
-	return 0;
-}
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h
index 8f55ff7..df23a82 100644
--- a/arch/powerpc/include/asm/book3s/64/radix.h
+++ b/arch/powerpc/include/asm/book3s/64/radix.h
@@ -264,7 +264,7 @@ static inline int radix__p4d_bad(p4d_t p4d)
 
 static inline int radix__pmd_trans_huge(pmd_t pmd)
 {
-	return (pmd_val(pmd) & (_PAGE_PTE | _PAGE_DEVMAP)) == _PAGE_PTE;
+	return (pmd_val(pmd) & _PAGE_PTE) == _PAGE_PTE;
 }
 
 static inline pmd_t radix__pmd_mkhuge(pmd_t pmd)
@@ -274,7 +274,7 @@ static inline pmd_t radix__pmd_mkhuge(pmd_t pmd)
 
 static inline int radix__pud_trans_huge(pud_t pud)
 {
-	return (pud_val(pud) & (_PAGE_PTE | _PAGE_DEVMAP)) == _PAGE_PTE;
+	return (pud_val(pud) & _PAGE_PTE) == _PAGE_PTE;
 }
 
 static inline pud_t radix__pud_mkhuge(pud_t pud)
@@ -315,16 +315,6 @@ static inline int radix__has_transparent_pud_hugepage(void)
 }
 #endif
 
-static inline pmd_t radix__pmd_mkdevmap(pmd_t pmd)
-{
-	return __pmd(pmd_val(pmd) | (_PAGE_PTE | _PAGE_DEVMAP));
-}
-
-static inline pud_t radix__pud_mkdevmap(pud_t pud)
-{
-	return __pud(pud_val(pud) | (_PAGE_PTE | _PAGE_DEVMAP));
-}
-
 struct vmem_altmap;
 struct dev_pagemap;
 extern int __meminit radix__vmemmap_create_mapping(unsigned long start,
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 1d7122a..8702076 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -91,7 +91,6 @@ config X86
 	select ARCH_HAS_NMI_SAFE_THIS_CPU_OPS
 	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
 	select ARCH_HAS_PMEM_API		if X86_64
-	select ARCH_HAS_PTE_DEVMAP		if X86_64
 	select ARCH_HAS_PTE_SPECIAL
 	select ARCH_HAS_HW_PTE_YOUNG
 	select ARCH_HAS_NONLEAF_PMD_YOUNG	if PGTABLE_LEVELS > 2
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 65b8e5b..5220f5a 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -268,16 +268,15 @@ static inline bool pmd_leaf(pmd_t pte)
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-/* NOTE: when predicate huge page, consider also pmd_devmap, or use pmd_leaf */
 static inline int pmd_trans_huge(pmd_t pmd)
 {
-	return (pmd_val(pmd) & (_PAGE_PSE|_PAGE_DEVMAP)) == _PAGE_PSE;
+	return (pmd_val(pmd) & _PAGE_PSE) == _PAGE_PSE;
 }
 
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
 static inline int pud_trans_huge(pud_t pud)
 {
-	return (pud_val(pud) & (_PAGE_PSE|_PAGE_DEVMAP)) == _PAGE_PSE;
+	return (pud_val(pud) & _PAGE_PSE) == _PAGE_PSE;
 }
 #endif
 
@@ -287,29 +286,6 @@ static inline int has_transparent_hugepage(void)
 	return boot_cpu_has(X86_FEATURE_PSE);
 }
 
-#ifdef CONFIG_ARCH_HAS_PTE_DEVMAP
-static inline int pmd_devmap(pmd_t pmd)
-{
-	return !!(pmd_val(pmd) & _PAGE_DEVMAP);
-}
-
-#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
-static inline int pud_devmap(pud_t pud)
-{
-	return !!(pud_val(pud) & _PAGE_DEVMAP);
-}
-#else
-static inline int pud_devmap(pud_t pud)
-{
-	return 0;
-}
-#endif
-
-static inline int pgd_devmap(pgd_t pgd)
-{
-	return 0;
-}
-#endif
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 static inline pte_t pte_set_flags(pte_t pte, pteval_t set)
@@ -470,11 +446,6 @@ static inline pte_t pte_mkspecial(pte_t pte)
 	return pte_set_flags(pte, _PAGE_SPECIAL);
 }
 
-static inline pte_t pte_mkdevmap(pte_t pte)
-{
-	return pte_set_flags(pte, _PAGE_SPECIAL|_PAGE_DEVMAP);
-}
-
 static inline pmd_t pmd_set_flags(pmd_t pmd, pmdval_t set)
 {
 	pmdval_t v = native_pmd_val(pmd);
@@ -560,11 +531,6 @@ static inline pmd_t pmd_mkwrite_shstk(pmd_t pmd)
 	return pmd_set_flags(pmd, _PAGE_DIRTY);
 }
 
-static inline pmd_t pmd_mkdevmap(pmd_t pmd)
-{
-	return pmd_set_flags(pmd, _PAGE_DEVMAP);
-}
-
 static inline pmd_t pmd_mkhuge(pmd_t pmd)
 {
 	return pmd_set_flags(pmd, _PAGE_PSE);
@@ -644,11 +610,6 @@ static inline pud_t pud_mkdirty(pud_t pud)
 	return pud_mksaveddirty(pud);
 }
 
-static inline pud_t pud_mkdevmap(pud_t pud)
-{
-	return pud_set_flags(pud, _PAGE_DEVMAP);
-}
-
 static inline pud_t pud_mkhuge(pud_t pud)
 {
 	return pud_set_flags(pud, _PAGE_PSE);
@@ -953,13 +914,6 @@ static inline int pte_present(pte_t a)
 	return pte_flags(a) & (_PAGE_PRESENT | _PAGE_PROTNONE);
 }
 
-#ifdef CONFIG_ARCH_HAS_PTE_DEVMAP
-static inline int pte_devmap(pte_t a)
-{
-	return (pte_flags(a) & _PAGE_DEVMAP) == _PAGE_DEVMAP;
-}
-#endif
-
 #define pte_accessible pte_accessible
 static inline bool pte_accessible(struct mm_struct *mm, pte_t a)
 {
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index b786449..1885ac2 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -33,7 +33,6 @@
 #define _PAGE_BIT_CPA_TEST	_PAGE_BIT_SOFTW1
 #define _PAGE_BIT_UFFD_WP	_PAGE_BIT_SOFTW2 /* userfaultfd wrprotected */
 #define _PAGE_BIT_SOFT_DIRTY	_PAGE_BIT_SOFTW3 /* software dirty tracking */
-#define _PAGE_BIT_DEVMAP	_PAGE_BIT_SOFTW4
 
 #ifdef CONFIG_X86_64
 #define _PAGE_BIT_SAVED_DIRTY	_PAGE_BIT_SOFTW5 /* Saved Dirty bit */
@@ -117,11 +116,9 @@
 
 #if defined(CONFIG_X86_64) || defined(CONFIG_X86_PAE)
 #define _PAGE_NX	(_AT(pteval_t, 1) << _PAGE_BIT_NX)
-#define _PAGE_DEVMAP	(_AT(u64, 1) << _PAGE_BIT_DEVMAP)
 #define _PAGE_SOFTW4	(_AT(pteval_t, 1) << _PAGE_BIT_SOFTW4)
 #else
 #define _PAGE_NX	(_AT(pteval_t, 0))
-#define _PAGE_DEVMAP	(_AT(pteval_t, 0))
 #define _PAGE_SOFTW4	(_AT(pteval_t, 0))
 #endif
 
@@ -148,7 +145,7 @@
 #define _COMMON_PAGE_CHG_MASK	(PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT |	\
				 _PAGE_SPECIAL | _PAGE_ACCESSED |	\
				 _PAGE_DIRTY_BITS | _PAGE_SOFT_DIRTY |	\
-				 _PAGE_DEVMAP | _PAGE_CC | _PAGE_UFFD_WP)
+				 _PAGE_CC | _PAGE_UFFD_WP)
 #define _PAGE_CHG_MASK	(_COMMON_PAGE_CHG_MASK | _PAGE_PAT)
 #define _HPAGE_CHG_MASK	(_COMMON_PAGE_CHG_MASK | _PAGE_PSE | _PAGE_PAT_LARGE)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 47d8923..5e9b754 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2709,13 +2709,6 @@ static inline pte_t pte_mkspecial(pte_t pte)
 }
 #endif
 
-#ifndef CONFIG_ARCH_HAS_PTE_DEVMAP
-static inline int pte_devmap(pte_t pte)
-{
-	return 0;
-}
-#endif
-
 extern pte_t *__get_locked_pte(struct mm_struct *mm, unsigned long addr,
			       spinlock_t **ptl);
 static inline pte_t *get_locked_pte(struct mm_struct *mm, unsigned long addr,
diff --git a/include/linux/pfn_t.h b/include/linux/pfn_t.h
index 2d91482..0100ad8 100644
--- a/include/linux/pfn_t.h
+++ b/include/linux/pfn_t.h
@@ -97,26 +97,6 @@ static inline pud_t pfn_t_pud(pfn_t pfn, pgprot_t pgprot)
 #endif
 #endif
 
-#ifdef CONFIG_ARCH_HAS_PTE_DEVMAP
-static inline bool pfn_t_devmap(pfn_t pfn)
-{
-	const u64 flags = PFN_DEV|PFN_MAP;
-
-	return (pfn.val & flags) == flags;
-}
-#else
-static inline bool pfn_t_devmap(pfn_t pfn)
-{
-	return false;
-}
-pte_t pte_mkdevmap(pte_t pte);
-pmd_t pmd_mkdevmap(pmd_t pmd);
-#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && \
-	defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
-pud_t pud_mkdevmap(pud_t pud);
-#endif
-#endif /* CONFIG_ARCH_HAS_PTE_DEVMAP */
-
 #ifdef CONFIG_ARCH_HAS_PTE_SPECIAL
 static inline bool pfn_t_special(pfn_t pfn)
 {
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 91e06bb..2fa40c8 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1591,21 +1591,6 @@ static inline int pud_write(pud_t pud)
 }
 #endif /* pud_write */
 
-#if !defined(CONFIG_ARCH_HAS_PTE_DEVMAP) || !defined(CONFIG_TRANSPARENT_HUGEPAGE)
-static inline int pmd_devmap(pmd_t pmd)
-{
-	return 0;
-}
-static inline int pud_devmap(pud_t pud)
-{
-	return 0;
-}
-static inline int pgd_devmap(pgd_t pgd)
-{
-	return 0;
-}
-#endif
-
 #if !defined(CONFIG_TRANSPARENT_HUGEPAGE) || \
	!defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
 static inline int pud_trans_huge(pud_t pud)
@@ -1860,8 +1845,8 @@ typedef unsigned int pgtbl_mod_mask;
  * - It should contain a huge PFN, which points to a huge page larger than
  *   PAGE_SIZE of the platform.  The PFN format isn't important here.
  *
- * - It should cover all kinds of huge mappings (e.g., pXd_trans_huge(),
- *   pXd_devmap(), or hugetlb mappings).
+ * - It should cover all kinds of huge mappings (i.e. pXd_trans_huge()
+ *   or hugetlb mappings).
  */
 #ifndef pgd_leaf
 #define pgd_leaf(x)	false
diff --git a/mm/Kconfig b/mm/Kconfig
index b4cb452..832a320 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -995,9 +995,6 @@ config ARCH_HAS_CURRENT_STACK_POINTER
	  register alias named "current_stack_pointer", this config can be
	  selected.
 
-config ARCH_HAS_PTE_DEVMAP
-	bool
-
 config ARCH_HAS_ZONE_DMA_SET
	bool
 
@@ -1015,7 +1012,6 @@ config ZONE_DEVICE
	depends on MEMORY_HOTPLUG
	depends on MEMORY_HOTREMOVE
	depends on SPARSEMEM_VMEMMAP
-	depends on ARCH_HAS_PTE_DEVMAP
	select XARRAY_MULTI
	help
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index e4969fb..1262148 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -348,12 +348,6 @@ static void __init pud_advanced_tests(struct pgtable_debug_args *args)
	vaddr &= HPAGE_PUD_MASK;
 
	pud = pfn_pud(args->pud_pfn, args->page_prot);
-	/*
-	 * Some architectures have debug checks to make sure
-	 * huge pud mapping are only found with devmap entries
-	 * For now test with only devmap entries.
-	 */
-	pud = pud_mkdevmap(pud);
	set_pud_at(args->mm, vaddr, args->pudp, pud);
	flush_dcache_page(page);
	pudp_set_wrprotect(args->mm, vaddr, args->pudp);
@@ -366,7 +360,6 @@ static void __init pud_advanced_tests(struct pgtable_debug_args *args)
	WARN_ON(!pud_none(pud));
 #endif /* __PAGETABLE_PMD_FOLDED */
	pud = pfn_pud(args->pud_pfn, args->page_prot);
-	pud = pud_mkdevmap(pud);
	pud = pud_wrprotect(pud);
	pud = pud_mkclean(pud);
	set_pud_at(args->mm, vaddr, args->pudp, pud);
@@ -384,7 +377,6 @@ static void __init pud_advanced_tests(struct pgtable_debug_args *args)
 #endif /* __PAGETABLE_PMD_FOLDED */
 
	pud = pfn_pud(args->pud_pfn, args->page_prot);
-	pud = pud_mkdevmap(pud);
	pud = pud_mkyoung(pud);
	set_pud_at(args->mm, vaddr, args->pudp, pud);
	flush_dcache_page(page);
@@ -693,53 +685,6 @@ static void __init pmd_protnone_tests(struct pgtable_debug_args *args)
 static void __init pmd_protnone_tests(struct pgtable_debug_args *args) { }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
-#ifdef CONFIG_ARCH_HAS_PTE_DEVMAP
-static void __init pte_devmap_tests(struct pgtable_debug_args *args)
-{
-	pte_t pte = pfn_pte(args->fixed_pte_pfn, args->page_prot);
-
-	pr_debug("Validating PTE devmap\n");
-	WARN_ON(!pte_devmap(pte_mkdevmap(pte)));
-}
-
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-static void __init pmd_devmap_tests(struct pgtable_debug_args *args)
-{
-	pmd_t pmd;
-
-	if (!has_transparent_hugepage())
-		return;
-
-	pr_debug("Validating PMD devmap\n");
-	pmd = pfn_pmd(args->fixed_pmd_pfn, args->page_prot);
-	WARN_ON(!pmd_devmap(pmd_mkdevmap(pmd)));
-}
-
-#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
-static void __init pud_devmap_tests(struct pgtable_debug_args *args)
-{
-	pud_t pud;
-
-	if (!has_transparent_pud_hugepage())
-		return;
-
-	pr_debug("Validating PUD devmap\n");
-	pud = pfn_pud(args->fixed_pud_pfn, args->page_prot);
-	WARN_ON(!pud_devmap(pud_mkdevmap(pud)));
-}
-#else  /* !CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
-static void __init pud_devmap_tests(struct pgtable_debug_args *args) { }
-#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
-#else  /* CONFIG_TRANSPARENT_HUGEPAGE */
-static void __init pmd_devmap_tests(struct pgtable_debug_args *args) { }
-static void __init pud_devmap_tests(struct pgtable_debug_args *args) { }
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
-#else
-static void __init pte_devmap_tests(struct pgtable_debug_args *args) { }
-static void __init pmd_devmap_tests(struct pgtable_debug_args *args) { }
-static void __init pud_devmap_tests(struct pgtable_debug_args *args) { }
-#endif /* CONFIG_ARCH_HAS_PTE_DEVMAP */
-
 static void __init pte_soft_dirty_tests(struct pgtable_debug_args *args)
 {
	pte_t pte = pfn_pte(args->fixed_pte_pfn, args->page_prot);
@@ -1341,10 +1286,6 @@ static int __init debug_vm_pgtable(void)
 
	pte_protnone_tests(&args);
	pmd_protnone_tests(&args);
 
-	pte_devmap_tests(&args);
-	pmd_devmap_tests(&args);
-	pud_devmap_tests(&args);
-
	pte_soft_dirty_tests(&args);
	pmd_soft_dirty_tests(&args);
	pte_swap_soft_dirty_tests(&args);
diff --git a/mm/hmm.c b/mm/hmm.c
index 7f78b0b..fa442b4 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -395,8 +395,7 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
	return 0;
 }
 
-#if defined(CONFIG_ARCH_HAS_PTE_DEVMAP) && \
-    defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
+#if defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
 static inline unsigned long pud_to_hmm_pfn_flags(struct hmm_range *range,
						 pud_t pud)
 {