From patchwork Thu Apr 11 00:57:22 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alistair Popple <apopple@nvidia.com>
X-Patchwork-Id: 13625223
From: Alistair Popple <apopple@nvidia.com>
To: linux-mm@kvack.org
Cc: david@fromorbit.com, dan.j.williams@intel.com, jhubbard@nvidia.com,
    rcampbell@nvidia.com, willy@infradead.org, jgg@nvidia.com,
    linux-fsdevel@vger.kernel.org, jack@suse.cz, djwong@kernel.org,
    hch@lst.de, david@redhat.com, ruansy.fnst@fujitsu.com,
    nvdimm@lists.linux.dev, linux-xfs@vger.kernel.org,
    linux-ext4@vger.kernel.org, jglisse@redhat.com, Alistair Popple
Subject: [RFC 01/10] mm/gup.c: Remove redundant check for PCI P2PDMA page
Date: Thu, 11 Apr 2024 10:57:22 +1000
X-Mailer: git-send-email 2.43.0

PCI P2PDMA pages are not mapped with pXX_devmap PTEs, therefore the check
in __gup_device_huge() is redundant. Remove it.

Signed-off-by: Alistair Popple
Reviewed-by: Jason Gunthorpe
Acked-by: David Hildenbrand
---
 mm/gup.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 2f8a2d8..a9c8a09 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2683,11 +2683,6 @@ static int __gup_device_huge(unsigned long pfn, unsigned long addr,
                         break;
                 }
 
-                if (!(flags & FOLL_PCI_P2PDMA) && is_pci_p2pdma_page(page)) {
-                        undo_dev_pagemap(nr, nr_start, flags, pages);
-                        break;
-                }
-
                 SetPageReferenced(page);
                 pages[*nr] = page;
                 if (unlikely(try_grab_page(page, flags))) {
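
For reference, the removed branch keys entirely off the pgmap type:
is_pci_p2pdma_page() is only true for MEMORY_DEVICE_PCI_P2PDMA pages, and
those are never mapped with pXX_devmap PTEs, so __gup_device_huge() can
never encounter one. The helper looks roughly like this (paraphrased from
include/linux/memremap.h of this era, so treat the exact form as
approximate):

        /* Approximate form of the helper the removed check relied on. */
        static inline bool is_pci_p2pdma_page(const struct page *page)
        {
                return IS_ENABLED(CONFIG_PCI_P2PDMA) &&
                        is_zone_device_page(page) &&
                        page->pgmap->type == MEMORY_DEVICE_PCI_P2PDMA;
        }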

From patchwork Thu Apr 11 00:57:23 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alistair Popple <apopple@nvidia.com>
X-Patchwork-Id: 13625224
From: Alistair Popple <apopple@nvidia.com>
To: linux-mm@kvack.org
Cc: david@fromorbit.com, dan.j.williams@intel.com, jhubbard@nvidia.com,
    rcampbell@nvidia.com, willy@infradead.org, jgg@nvidia.com,
    linux-fsdevel@vger.kernel.org, jack@suse.cz, djwong@kernel.org,
    hch@lst.de, david@redhat.com, ruansy.fnst@fujitsu.com,
    nvdimm@lists.linux.dev, linux-xfs@vger.kernel.org,
    linux-ext4@vger.kernel.org, jglisse@redhat.com, Alistair Popple
Subject: [RFC 02/10] mm/hmm: Remove dead check for HugeTLB and FS DAX
Date: Thu, 11 Apr 2024 10:57:23 +1000
X-Mailer: git-send-email 2.43.0

pud_huge() returns true only for a HugeTLB page. pud_devmap() is only used
by FS DAX pages. These two things are mutually exclusive, so this code is
dead and can be removed.

Signed-off-by: Alistair Popple
---
 mm/hmm.c | 33 ---------------------------------
 1 file changed, 33 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index 277ddca..5bbfb0e 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -411,9 +411,6 @@ static inline unsigned long pud_to_hmm_pfn_flags(struct hmm_range *range,
 static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
                 struct mm_walk *walk)
 {
-        struct hmm_vma_walk *hmm_vma_walk = walk->private;
-        struct hmm_range *range = hmm_vma_walk->range;
-        unsigned long addr = start;
         pud_t pud;
         spinlock_t *ptl = pud_trans_huge_lock(pudp, walk->vma);
 
@@ -429,39 +426,9 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
                 return hmm_vma_walk_hole(start, end, -1, walk);
         }
 
-        if (pud_huge(pud) && pud_devmap(pud)) {
-                unsigned long i, npages, pfn;
-                unsigned int required_fault;
-                unsigned long *hmm_pfns;
-                unsigned long cpu_flags;
-
-                if (!pud_present(pud)) {
-                        spin_unlock(ptl);
-                        return hmm_vma_walk_hole(start, end, -1, walk);
-                }
-
-                i = (addr - range->start) >> PAGE_SHIFT;
-                npages = (end - addr) >> PAGE_SHIFT;
-                hmm_pfns = &range->hmm_pfns[i];
-
-                cpu_flags = pud_to_hmm_pfn_flags(range, pud);
-                required_fault = hmm_range_need_fault(hmm_vma_walk, hmm_pfns,
-                                                      npages, cpu_flags);
-                if (required_fault) {
-                        spin_unlock(ptl);
-                        return hmm_vma_fault(addr, end, required_fault, walk);
-                }
-
-                pfn = pud_pfn(pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
-                for (i = 0; i < npages; ++i, ++pfn)
-                        hmm_pfns[i] = pfn | cpu_flags;
-                goto out_unlock;
-        }
-
         /* Ask for the PUD to be split */
         walk->action = ACTION_SUBTREE;
 
-out_unlock:
         spin_unlock(ptl);
         return 0;
 }
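
The "only used by FS DAX" half of that argument comes from the huge fault
insertion path: a devmap PUD is only created when the pfn being inserted is
itself a devmap pfn, and in practice only the DAX PUD fault path supplies
one. The relevant part of insert_pfn_pud() in mm/huge_memory.c looks
roughly like the sketch below (paraphrased from memory, so approximate):

        pud_t entry = pud_mkhuge(pfn_t_pud(pfn, prot));

        if (pfn_t_devmap(pfn))
                entry = pud_mkdevmap(entry);    /* devmap PUDs only come from here */
        if (write)
                entry = maybe_pud_mkwrite(pud_mkdirty(entry), vma);
        set_pud_at(mm, addr, pud, entry);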

From patchwork Thu Apr 11 00:57:24 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alistair Popple <apopple@nvidia.com>
X-Patchwork-Id: 13625225
From: Alistair Popple <apopple@nvidia.com>
To: linux-mm@kvack.org
Cc: david@fromorbit.com, dan.j.williams@intel.com, jhubbard@nvidia.com,
    rcampbell@nvidia.com, willy@infradead.org, jgg@nvidia.com,
    linux-fsdevel@vger.kernel.org, jack@suse.cz, djwong@kernel.org,
    hch@lst.de, david@redhat.com, ruansy.fnst@fujitsu.com,
    nvdimm@lists.linux.dev, linux-xfs@vger.kernel.org,
    linux-ext4@vger.kernel.org, jglisse@redhat.com, Alistair Popple
Subject: [RFC 03/10] pci/p2pdma: Don't initialise page refcount to one
Date: Thu, 11 Apr 2024 10:57:24 +1000
X-Mailer: git-send-email 2.43.0

The reference counts for ZONE_DEVICE private pages should be initialised by
the driver when the page is actually allocated by the driver allocator, not
when they are first created. This is currently the case for
MEMORY_DEVICE_PRIVATE and MEMORY_DEVICE_COHERENT pages, but not
MEMORY_DEVICE_PCI_P2PDMA pages, so fix that up.

Signed-off-by: Alistair Popple
---
 drivers/pci/p2pdma.c | 2 ++
 mm/memremap.c        | 8 ++++----
 mm/mm_init.c         | 4 +++-
 3 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index fa7370f..ab7ef18 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -128,6 +128,8 @@ static int p2pmem_alloc_mmap(struct file *filp, struct kobject *kobj,
                 goto out;
         }
 
+        get_page(virt_to_page(kaddr));
+
         /*
          * vm_insert_page() can sleep, so a reference is taken to mapping
          * such that rcu_read_unlock() can be done before inserting the

diff --git a/mm/memremap.c b/mm/memremap.c
index bee8556..99d26ff 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -508,15 +508,15 @@ void free_zone_device_page(struct page *page)
         page->mapping = NULL;
         page->pgmap->ops->page_free(page);
 
-        if (page->pgmap->type != MEMORY_DEVICE_PRIVATE &&
-            page->pgmap->type != MEMORY_DEVICE_COHERENT)
+        if (page->pgmap->type == MEMORY_DEVICE_PRIVATE ||
+            page->pgmap->type == MEMORY_DEVICE_COHERENT)
+                put_dev_pagemap(page->pgmap);
+        else if (page->pgmap->type != MEMORY_DEVICE_PCI_P2PDMA)
                 /*
                  * Reset the page count to 1 to prepare for handing out the page
                  * again.
                  */
                 set_page_count(page, 1);
-        else
-                put_dev_pagemap(page->pgmap);
 }
 
 void zone_device_page_init(struct page *page)

diff --git a/mm/mm_init.c b/mm/mm_init.c
index 50f2f34..da45abd 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -6,6 +6,7 @@
  * Author Mel Gorman
  *
  */
+#include "linux/memremap.h"
 #include
 #include
 #include
@@ -1006,7 +1007,8 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
          * which will set the page count to 1 when allocating the page.
          */
         if (pgmap->type == MEMORY_DEVICE_PRIVATE ||
-            pgmap->type == MEMORY_DEVICE_COHERENT)
+            pgmap->type == MEMORY_DEVICE_COHERENT ||
+            pgmap->type == MEMORY_DEVICE_PCI_P2PDMA)
                 set_page_count(page, 0);
 }
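
The driver-side pattern this moves MEMORY_DEVICE_PCI_P2PDMA towards is the
one DEVICE_PRIVATE and DEVICE_COHERENT drivers already follow: pages come
out of memremap_pages() with a zero refcount and the driver takes the first
reference only when it hands a page out, via zone_device_page_init(). A
minimal sketch of that pattern, assuming a hypothetical driver allocator
my_dev_alloc_page() (not a real kernel API) and ignoring how the driver
tracks which of its pages are free:

        #include <linux/memremap.h>
        #include <linux/mm.h>

        static struct page *my_dev_alloc_page(struct dev_pagemap *pgmap)
        {
                /* Sketch: just take the first pfn covered by the pagemap. */
                struct page *page = pfn_to_page(pgmap->range.start >> PAGE_SHIFT);

                /*
                 * Pages start life with a zero refcount; the allocator takes
                 * the initial reference (and locks the page) when handing it
                 * out, which is what zone_device_page_init() does.
                 */
                zone_device_page_init(page);
                return page;
        }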

From patchwork Thu Apr 11 00:57:25 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alistair Popple <apopple@nvidia.com>
X-Patchwork-Id: 13625226
From: Alistair Popple <apopple@nvidia.com>
To: linux-mm@kvack.org
Cc: david@fromorbit.com, dan.j.williams@intel.com, jhubbard@nvidia.com,
    rcampbell@nvidia.com, willy@infradead.org, jgg@nvidia.com,
    linux-fsdevel@vger.kernel.org, jack@suse.cz, djwong@kernel.org,
    hch@lst.de, david@redhat.com, ruansy.fnst@fujitsu.com,
    nvdimm@lists.linux.dev, linux-xfs@vger.kernel.org,
    linux-ext4@vger.kernel.org, jglisse@redhat.com, Alistair Popple
Subject: [RFC 04/10] fs/dax: Don't track page mapping/index
Date: Thu, 11 Apr 2024 10:57:25 +1000
Message-ID: <322065d373bb6571b700dba4450f1759b304644a.1712796818.git-series.apopple@nvidia.com>
X-Mailer: git-send-email 2.43.0

The page->mapping and page->index fields are normally used by the pagecache
and rmap for looking up virtual mappings of pages. FS DAX implements its own
kind of page cache and rmap lookups, so these fields are unnecessary. They
are currently only used to detect error/warning conditions which should
never occur. A future change will change the way shared mappings are
detected by doing normal page reference counting instead, so remove the
unnecessary checks.

Signed-off-by: Alistair Popple
---
 fs/dax.c                   | 84 +---------------------------------------
 include/linux/page-flags.h |  6 +---
 2 files changed, 90 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 8fafecb..a7bd423 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -320,85 +320,6 @@ static unsigned long dax_end_pfn(void *entry)
         for (pfn = dax_to_pfn(entry); \
                         pfn < dax_end_pfn(entry); pfn++)
 
-static inline bool dax_page_is_shared(struct page *page)
-{
-        return page->mapping == PAGE_MAPPING_DAX_SHARED;
-}
-
-/*
- * Set the page->mapping with PAGE_MAPPING_DAX_SHARED flag, increase the
- * refcount.
- */
-static inline void dax_page_share_get(struct page *page)
-{
-        if (page->mapping != PAGE_MAPPING_DAX_SHARED) {
-                /*
-                 * Reset the index if the page was already mapped
-                 * regularly before.
-                 */
-                if (page->mapping)
-                        page->share = 1;
-                page->mapping = PAGE_MAPPING_DAX_SHARED;
-        }
-        page->share++;
-}
-
-static inline unsigned long dax_page_share_put(struct page *page)
-{
-        return --page->share;
-}
-
-/*
- * When it is called in dax_insert_entry(), the shared flag will indicate that
- * whether this entry is shared by multiple files. If so, set the page->mapping
- * PAGE_MAPPING_DAX_SHARED, and use page->share as refcount.
- */
-static void dax_associate_entry(void *entry, struct address_space *mapping,
-                struct vm_area_struct *vma, unsigned long address, bool shared)
-{
-        unsigned long size = dax_entry_size(entry), pfn, index;
-        int i = 0;
-
-        if (IS_ENABLED(CONFIG_FS_DAX_LIMITED))
-                return;
-
-        index = linear_page_index(vma, address & ~(size - 1));
-        for_each_mapped_pfn(entry, pfn) {
-                struct page *page = pfn_to_page(pfn);
-
-                if (shared) {
-                        dax_page_share_get(page);
-                } else {
-                        WARN_ON_ONCE(page->mapping);
-                        page->mapping = mapping;
-                        page->index = index + i++;
-                }
-        }
-}
-
-static void dax_disassociate_entry(void *entry, struct address_space *mapping,
-                bool trunc)
-{
-        unsigned long pfn;
-
-        if (IS_ENABLED(CONFIG_FS_DAX_LIMITED))
-                return;
-
-        for_each_mapped_pfn(entry, pfn) {
-                struct page *page = pfn_to_page(pfn);
-
-                WARN_ON_ONCE(trunc && page_ref_count(page) > 1);
-                if (dax_page_is_shared(page)) {
-                        /* keep the shared flag if this page is still shared */
-                        if (dax_page_share_put(page) > 0)
-                                continue;
-                } else
-                        WARN_ON_ONCE(page->mapping && page->mapping != mapping);
-                page->mapping = NULL;
-                page->index = 0;
-        }
-}
-
 static struct page *dax_busy_page(void *entry)
 {
         unsigned long pfn;
@@ -620,7 +541,6 @@ static void *grab_mapping_entry(struct xa_state *xas,
                         xas_lock_irq(xas);
                 }
 
-                dax_disassociate_entry(entry, mapping, false);
                 xas_store(xas, NULL);   /* undo the PMD join */
                 dax_wake_entry(xas, entry, WAKE_ALL);
                 mapping->nrpages -= PG_PMD_NR;
@@ -757,7 +677,6 @@ static int __dax_invalidate_entry(struct address_space *mapping,
             (xas_get_mark(&xas, PAGECACHE_TAG_DIRTY) ||
              xas_get_mark(&xas, PAGECACHE_TAG_TOWRITE)))
                 goto out;
-        dax_disassociate_entry(entry, mapping, trunc);
         xas_store(&xas, NULL);
         mapping->nrpages -= 1UL << dax_entry_order(entry);
         ret = 1;
@@ -894,9 +813,6 @@ static void *dax_insert_entry(struct xa_state *xas, struct vm_fault *vmf,
         if (shared || dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) {
                 void *old;
 
-                dax_disassociate_entry(entry, mapping, false);
-                dax_associate_entry(new_entry, mapping, vmf->vma, vmf->address,
-                                shared);
                 /*
                  * Only swap our new entry into the page cache if the current
                  * entry is a zero page or an empty entry. If a normal PTE or

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 5c02720..85d5427 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -631,12 +631,6 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
 #define PAGE_MAPPING_KSM        (PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE)
 #define PAGE_MAPPING_FLAGS      (PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE)
 
-/*
- * Different with flags above, this flag is used only for fsdax mode. It
- * indicates that this page->mapping is now under reflink case.
- */
-#define PAGE_MAPPING_DAX_SHARED ((void *)0x1)
-
 static __always_inline bool folio_mapping_flags(struct folio *folio)
 {
         return ((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS) != 0;
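
The direction the last paragraph of the commit message points at can be
illustrated with a small sketch: once page->mapping no longer doubles as a
PAGE_MAPPING_DAX_SHARED tag, whether a fsdax page is shared between files
becomes a plain refcount question. dax_page_is_shared_by_refcount() below
is a hypothetical name used only for illustration and is not part of this
patch:

        #include <linux/mm.h>

        /*
         * Sketch only, assuming the follow-up refcounting scheme described
         * above: a fsdax page mapped from more than one file entry simply
         * holds more than one reference.
         */
        static inline bool dax_page_is_shared_by_refcount(struct page *page)
        {
                return page_ref_count(page) > 1;
        }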
From: Alistair Popple
To: linux-mm@kvack.org
Cc: david@fromorbit.com, dan.j.williams@intel.com, jhubbard@nvidia.com, rcampbell@nvidia.com, willy@infradead.org, jgg@nvidia.com, linux-fsdevel@vger.kernel.org, jack@suse.cz, djwong@kernel.org, hch@lst.de, david@redhat.com, ruansy.fnst@fujitsu.com, nvdimm@lists.linux.dev, linux-xfs@vger.kernel.org, linux-ext4@vger.kernel.org, jglisse@redhat.com, Alistair Popple
Subject: [RFC 05/10] fs/dax: Refactor wait for dax idle page
Date: Thu, 11 Apr 2024 10:57:26 +1000
A FS DAX page is considered idle when its refcount drops to one. This
is currently open-coded in all file systems supporting FS DAX. Move the
idle detection to a common function to make future changes easier.

Signed-off-by: Alistair Popple
Reviewed-by: Jan Kara
---
 fs/ext4/inode.c | 5 +----
 fs/fuse/dax.c | 4 +---
 fs/xfs/xfs_file.c | 4 +---
 include/linux/dax.h | 11 +++++++++++
 4 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 4ce35f1..e9cef7d 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -3868,10 +3868,7 @@ int ext4_break_layouts(struct inode *inode)
 		if (!page)
 			return 0;
 
-		error = ___wait_var_event(&page->_refcount,
-				atomic_read(&page->_refcount) == 1,
-				TASK_INTERRUPTIBLE, 0, 0,
-				ext4_wait_dax_page(inode));
+		error = dax_wait_page_idle(page, ext4_wait_dax_page, inode);
 	} while (error == 0);
 
 	return error;
diff --git a/fs/fuse/dax.c b/fs/fuse/dax.c
index 23904a6..8a62483 100644
--- a/fs/fuse/dax.c
+++ b/fs/fuse/dax.c
@@ -676,9 +676,7 @@ static int __fuse_dax_break_layouts(struct inode *inode, bool *retry,
 		return 0;
 
 	*retry = true;
-	return ___wait_var_event(&page->_refcount,
-			atomic_read(&page->_refcount) == 1, TASK_INTERRUPTIBLE,
-			0, 0, fuse_wait_dax_page(inode));
+	return dax_wait_page_idle(page, fuse_wait_dax_page, inode);
 }
 
 /* dmap_end == 0 leads to unmapping of whole file */
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index 2037002..099cd70 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -849,9 +849,7 @@ xfs_break_dax_layouts(
 		return 0;
 
 	*retry = true;
-	return ___wait_var_event(&page->_refcount,
-			atomic_read(&page->_refcount) == 1, TASK_INTERRUPTIBLE,
-			0, 0, xfs_wait_dax_page(inode));
+	return dax_wait_page_idle(page, xfs_wait_dax_page, inode);
 }
 
 int
diff --git a/include/linux/dax.h b/include/linux/dax.h
index 22cd990..bced4d4 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -212,6 +212,17 @@ int dax_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
 int dax_truncate_page(struct inode *inode, loff_t pos, bool *did_zero,
 		const struct iomap_ops *ops);
 
+static inline int dax_wait_page_idle(struct page *page,
+				void (cb)(struct inode *),
+				struct inode *inode)
+{
+	int ret;
+
+	ret = ___wait_var_event(page, page_ref_count(page) == 1,
+				TASK_INTERRUPTIBLE, 0, 0, cb(inode));
+	return ret;
+}
+
 #if IS_ENABLED(CONFIG_DAX)
 int dax_read_lock(void);
 void dax_read_unlock(int id);
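For illustration only: a minimal sketch of what a filesystem's break-layouts loop looks like once it is converted to the helper above, modelled on the ext4 and xfs hunks in this patch. The myfs_* names are placeholders rather than anything in this series; the callback simply drops and retakes the lock that would otherwise prevent the page from going idle.

#include <linux/dax.h>
#include <linux/fs.h>
#include <linux/sched.h>

static void myfs_wait_dax_page(struct inode *inode)
{
	/* Drop the lock so the pin holder can make progress, then sleep. */
	filemap_invalidate_unlock(inode->i_mapping);
	schedule();
	filemap_invalidate_lock(inode->i_mapping);
}

static int myfs_break_dax_layouts(struct inode *inode)
{
	struct page *page;
	int error = 0;

	do {
		page = dax_layout_busy_page(inode->i_mapping);
		if (!page)
			return 0;

		/* Sleeps until the DAX page is idle, i.e. refcount == 1. */
		error = dax_wait_page_idle(page, myfs_wait_dax_page, inode);
	} while (error == 0);

	return error;
}

Because ___wait_var_event() runs with TASK_INTERRUPTIBLE, a pending signal ends the wait with an error and the loop above simply returns it.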
From patchwork Thu Apr 11 00:57:27 2024
From: Alistair Popple
To: linux-mm@kvack.org
Cc: david@fromorbit.com, dan.j.williams@intel.com, jhubbard@nvidia.com, rcampbell@nvidia.com, willy@infradead.org, jgg@nvidia.com, linux-fsdevel@vger.kernel.org, jack@suse.cz, djwong@kernel.org, hch@lst.de, david@redhat.com, ruansy.fnst@fujitsu.com, nvdimm@lists.linux.dev, linux-xfs@vger.kernel.org, linux-ext4@vger.kernel.org, jglisse@redhat.com, Alistair Popple
Subject: [RFC 06/10] fs/dax: Add dax_page_free callback
Date: Thu, 11 Apr 2024 10:57:27 +1000
When a fs dax page is freed it has to notify filesystems that the page
has been unpinned/unmapped and is free. Currently this involves special
code in the page free paths to detect a transition of refcount from 2
to 1 and to call some fs dax specific code.

A future change will require this to happen when the page refcount
drops to zero. In this case we can use the existing
pgmap->ops->page_free() callback so wire that up for all devices that
support FS DAX (nvdimm and virtio).
Signed-off-by: Alistair Popple
---
 drivers/nvdimm/pmem.c | 1 +
 fs/dax.c | 6 ++++++
 fs/fuse/virtio_fs.c | 5 +++++
 include/linux/dax.h | 1 +
 4 files changed, 13 insertions(+)

diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 4e8fdcb..b027e1f 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -444,6 +444,7 @@ static int pmem_pagemap_memory_failure(struct dev_pagemap *pgmap,
 
 static const struct dev_pagemap_ops fsdax_pagemap_ops = {
 	.memory_failure		= pmem_pagemap_memory_failure,
+	.page_free		= dax_page_free,
 };
 
 static int pmem_attach_disk(struct device *dev,
diff --git a/fs/dax.c b/fs/dax.c
index a7bd423..17b1c5f 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1981,3 +1981,9 @@ int dax_remap_file_range_prep(struct file *file_in, loff_t pos_in,
 				pos_out, len, remap_flags, ops);
 }
 EXPORT_SYMBOL_GPL(dax_remap_file_range_prep);
+
+void dax_page_free(struct page *page)
+{
+	wake_up_var(page);
+}
+EXPORT_SYMBOL_GPL(dax_page_free);
diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
index 5f1be1d..11bfc28 100644
--- a/fs/fuse/virtio_fs.c
+++ b/fs/fuse/virtio_fs.c
@@ -795,6 +795,10 @@ static void virtio_fs_cleanup_dax(void *data)
 	put_dax(dax_dev);
 }
 
+static const struct dev_pagemap_ops fsdax_pagemap_ops = {
+	.page_free = dax_page_free,
+};
+
 static int virtio_fs_setup_dax(struct virtio_device *vdev, struct virtio_fs *fs)
 {
 	struct virtio_shm_region cache_reg;
@@ -827,6 +831,7 @@ static int virtio_fs_setup_dax(struct virtio_device *vdev, struct virtio_fs *fs)
 		return -ENOMEM;
 
 	pgmap->type = MEMORY_DEVICE_FS_DAX;
+	pgmap->ops = &fsdax_pagemap_ops;
 
 	/* Ideally we would directly use the PCI BAR resource but
 	 * devm_memremap_pages() wants its own copy in pgmap. So
diff --git a/include/linux/dax.h b/include/linux/dax.h
index bced4d4..c0c3206 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -212,6 +212,7 @@ int dax_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
 int dax_truncate_page(struct inode *inode, loff_t pos, bool *did_zero,
 		const struct iomap_ops *ops);
 
+void dax_page_free(struct page *page);
 static inline int dax_wait_page_idle(struct page *page,
 				void (cb)(struct inode *),
 				struct inode *inode)
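As an illustration of how a driver opts in to the callback this patch adds, here is a sketch mirroring the pmem and virtiofs hunks above. The mydev_* names and the mapping helper are hypothetical; only dax_page_free() (added by this patch) and the dev_pagemap fields come from the kernel.

#include <linux/dax.h>
#include <linux/device.h>
#include <linux/memremap.h>
#include <linux/range.h>

static const struct dev_pagemap_ops mydev_fsdax_pagemap_ops = {
	/* Called by the core once a DAX struct page becomes free. */
	.page_free = dax_page_free,
};

static void *mydev_map_pages(struct device *dev, struct dev_pagemap *pgmap,
			     struct range *range)
{
	pgmap->type = MEMORY_DEVICE_FS_DAX;
	pgmap->range = *range;
	pgmap->nr_range = 1;
	pgmap->ops = &mydev_fsdax_pagemap_ops;

	/*
	 * Creates the struct pages backing the device memory; from now on
	 * the page_free callback fires for them rather than relying on the
	 * open-coded refcount checks in the generic page free paths.
	 */
	return devm_memremap_pages(dev, pgmap);
}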
From patchwork Thu Apr 11 00:57:28 2024
From: Alistair Popple
To: linux-mm@kvack.org
Cc: david@fromorbit.com, dan.j.williams@intel.com, jhubbard@nvidia.com, rcampbell@nvidia.com, willy@infradead.org, jgg@nvidia.com, linux-fsdevel@vger.kernel.org, jack@suse.cz, djwong@kernel.org, hch@lst.de, david@redhat.com, ruansy.fnst@fujitsu.com, nvdimm@lists.linux.dev, linux-xfs@vger.kernel.org, linux-ext4@vger.kernel.org, jglisse@redhat.com, Alistair Popple
Subject: [RFC 07/10] mm: Allow compound zone device pages
Date: Thu, 11 Apr 2024 10:57:28 +1000
Message-ID: <9c21d7ed27117f6a2c2ef86fe9d2d88e4c8c8ad4.1712796818.git-series.apopple@nvidia.com>
Zone device pages are used to represent various type of device memory managed by device drivers. Currently compound zone device pages are not supported. This is because MEMORY_DEVICE_FS_DAX pages are the only user of higher order zone device pages and have their own page reference counting.

A future change will unify FS DAX reference counting with normal page reference counting rules and remove the special FS DAX reference counting. Supporting that requires compound zone device pages.

Supporting compound zone device pages requires compound_head() to distinguish between head and tail pages whilst still preserving the special struct page fields that are specific to zone device pages.

A tail page is distinguished by having bit zero being set in page->compound_head, with the remaining bits pointing to the head page. For zone device pages page->compound_head is shared with page->pgmap.

The page->pgmap field is common to all pages within a memory section. Therefore pgmap is the same for both head and tail pages and we can use the same scheme to distinguish tail pages. To obtain the pgmap for a tail page a new accessor is introduced to fetch it from compound_head.

Signed-off-by: Alistair Popple
Reviewed-by: Jason Gunthorpe
---
 drivers/gpu/drm/nouveau/nouveau_dmem.c | 2 +-
 drivers/pci/p2pdma.c | 2 +-
 include/linux/memremap.h | 12 +++++++++---
 include/linux/migrate.h | 2 +-
 lib/test_hmm.c | 2 +-
 mm/hmm.c | 2 +-
 mm/memory.c | 2 +-
 mm/memremap.c | 6 +++---
 mm/migrate_device.c | 4 ++--
 9 files changed, 20 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c index 12feecf..eb49f07 100644 --- a/drivers/gpu/drm/nouveau/nouveau_dmem.c +++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c @@ -88,7 +88,7 @@ struct nouveau_dmem { static struct nouveau_dmem_chunk *nouveau_page_to_chunk(struct page *page) { - return container_of(page->pgmap, struct nouveau_dmem_chunk, pagemap); + return container_of(page_dev_pagemap(page), struct nouveau_dmem_chunk, pagemap); } static struct nouveau_drm *page_to_drm(struct page *page)
diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c index ab7ef18..dfc2a17 100644 --- a/drivers/pci/p2pdma.c +++ b/drivers/pci/p2pdma.c @@ -195,7 +195,7 @@ static const struct attribute_group p2pmem_group = { static void p2pdma_page_free(struct page *page) { - struct pci_p2pdma_pagemap *pgmap = to_p2p_pgmap(page->pgmap); + struct pci_p2pdma_pagemap *pgmap = to_p2p_pgmap(page_dev_pagemap(page)); /* safe to dereference while a reference is held to the percpu ref */ struct pci_p2pdma *p2pdma = rcu_dereference_protected(pgmap->provider->p2pdma, 1);
diff --git a/include/linux/memremap.h b/include/linux/memremap.h index 1314d9c..0773f8b 100644 --- a/include/linux/memremap.h +++ b/include/linux/memremap.h @@ -139,6 +139,12 @@ struct dev_pagemap { }; }; +static inline struct dev_pagemap *page_dev_pagemap(const struct page *page) +{ + WARN_ON(!is_zone_device_page(page)); + return compound_head(page)->pgmap; +} + static inline bool pgmap_has_memory_failure(struct dev_pagemap *pgmap) { return pgmap->ops &&
pgmap->ops->memory_failure; @@ -160,7 +166,7 @@ static inline bool is_device_private_page(const struct page *page) { return IS_ENABLED(CONFIG_DEVICE_PRIVATE) && is_zone_device_page(page) && - page->pgmap->type == MEMORY_DEVICE_PRIVATE; + page_dev_pagemap(page)->type == MEMORY_DEVICE_PRIVATE; } static inline bool folio_is_device_private(const struct folio *folio) @@ -172,13 +178,13 @@ static inline bool is_pci_p2pdma_page(const struct page *page) { return IS_ENABLED(CONFIG_PCI_P2PDMA) && is_zone_device_page(page) && - page->pgmap->type == MEMORY_DEVICE_PCI_P2PDMA; + page_dev_pagemap(page)->type == MEMORY_DEVICE_PCI_P2PDMA; } static inline bool is_device_coherent_page(const struct page *page) { return is_zone_device_page(page) && - page->pgmap->type == MEMORY_DEVICE_COHERENT; + page_dev_pagemap(page)->type == MEMORY_DEVICE_COHERENT; } static inline bool folio_is_device_coherent(const struct folio *folio) diff --git a/include/linux/migrate.h b/include/linux/migrate.h index 711dd94..ebaf279 100644 --- a/include/linux/migrate.h +++ b/include/linux/migrate.h @@ -200,7 +200,7 @@ struct migrate_vma { unsigned long end; /* - * Set to the owner value also stored in page->pgmap->owner for + * Set to the owner value also stored in page_dev_pagemap(page)->owner for * migrating out of device private memory. The flags also need to * be set to MIGRATE_VMA_SELECT_DEVICE_PRIVATE. * The caller should always set this field when using mmu notifier diff --git a/lib/test_hmm.c b/lib/test_hmm.c index 717dcb8..1101ff4 100644 --- a/lib/test_hmm.c +++ b/lib/test_hmm.c @@ -195,7 +195,7 @@ static int dmirror_fops_release(struct inode *inode, struct file *filp) static struct dmirror_chunk *dmirror_page_to_chunk(struct page *page) { - return container_of(page->pgmap, struct dmirror_chunk, pagemap); + return container_of(page_dev_pagemap(page), struct dmirror_chunk, pagemap); } static struct dmirror_device *dmirror_page_to_device(struct page *page) diff --git a/mm/hmm.c b/mm/hmm.c index 5bbfb0e..a665a3c 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -248,7 +248,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, * just report the PFN. */ if (is_device_private_entry(entry) && - pfn_swap_entry_to_page(entry)->pgmap->owner == + page_dev_pagemap(pfn_swap_entry_to_page(entry))->owner == range->dev_private_owner) { cpu_flags = HMM_PFN_VALID; if (is_writable_device_private_entry(entry)) diff --git a/mm/memory.c b/mm/memory.c index 517221f..52248d4 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3768,7 +3768,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) */ get_page(vmf->page); pte_unmap_unlock(vmf->pte, vmf->ptl); - ret = vmf->page->pgmap->ops->migrate_to_ram(vmf); + ret = page_dev_pagemap(vmf->page)->ops->migrate_to_ram(vmf); put_page(vmf->page); } else if (is_hwpoison_entry(entry)) { ret = VM_FAULT_HWPOISON; diff --git a/mm/memremap.c b/mm/memremap.c index 99d26ff..619b059 100644 --- a/mm/memremap.c +++ b/mm/memremap.c @@ -470,7 +470,7 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap); void free_zone_device_page(struct page *page) { - if (WARN_ON_ONCE(!page->pgmap->ops || !page->pgmap->ops->page_free)) + if (WARN_ON_ONCE(!page_dev_pagemap(page)->ops || !page_dev_pagemap(page)->ops->page_free)) return; mem_cgroup_uncharge(page_folio(page)); @@ -506,7 +506,7 @@ void free_zone_device_page(struct page *page) * to clear page->mapping. 
*/ page->mapping = NULL; - page->pgmap->ops->page_free(page); + page_dev_pagemap(page)->ops->page_free(page); if (page->pgmap->type == MEMORY_DEVICE_PRIVATE || page->pgmap->type == MEMORY_DEVICE_COHERENT) @@ -525,7 +525,7 @@ void zone_device_page_init(struct page *page) * Drivers shouldn't be allocating pages after calling * memunmap_pages(). */ - WARN_ON_ONCE(!percpu_ref_tryget_live(&page->pgmap->ref)); + WARN_ON_ONCE(!percpu_ref_tryget_live(&page_dev_pagemap(page)->ref)); set_page_count(page, 1); lock_page(page); }
diff --git a/mm/migrate_device.c b/mm/migrate_device.c index 8ac1f79..1e1c82f 100644 --- a/mm/migrate_device.c +++ b/mm/migrate_device.c @@ -134,7 +134,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp, page = pfn_swap_entry_to_page(entry); if (!(migrate->flags & MIGRATE_VMA_SELECT_DEVICE_PRIVATE) || - page->pgmap->owner != migrate->pgmap_owner) + page_dev_pagemap(page)->owner != migrate->pgmap_owner) goto next; mpfn = migrate_pfn(page_to_pfn(page)) | @@ -155,7 +155,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp, goto next; else if (page && is_device_coherent_page(page) && (!(migrate->flags & MIGRATE_VMA_SELECT_DEVICE_COHERENT) || - page->pgmap->owner != migrate->pgmap_owner)) + page_dev_pagemap(page)->owner != migrate->pgmap_owner)) goto next; mpfn = migrate_pfn(pfn) | MIGRATE_PFN_MIGRATE; mpfn |= pte_write(pte) ? MIGRATE_PFN_WRITE : 0;
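Purely as an illustration of the pattern the new accessor enables: a made-up driver (mydrv_*) that wraps its per-chunk state around a dev_pagemap, modelled on the nouveau and test_hmm conversions above.

#include <linux/memremap.h>
#include <linux/mm.h>

struct mydrv_chunk {
	struct dev_pagemap pagemap;
	/* driver-private per-chunk state would live here */
};

static struct mydrv_chunk *mydrv_page_to_chunk(struct page *page)
{
	/*
	 * A tail page of a compound zone device page reuses the pgmap field
	 * as compound_head (with bit zero set), so it cannot be dereferenced
	 * directly; page_dev_pagemap() goes via compound_head() and works
	 * for head and tail pages alike.
	 */
	return container_of(page_dev_pagemap(page), struct mydrv_chunk, pagemap);
}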
From patchwork Thu Apr 11 00:57:29 2024
From: Alistair Popple
To: linux-mm@kvack.org
Cc: david@fromorbit.com, dan.j.williams@intel.com, jhubbard@nvidia.com, rcampbell@nvidia.com, willy@infradead.org, jgg@nvidia.com, linux-fsdevel@vger.kernel.org, jack@suse.cz, djwong@kernel.org, hch@lst.de, david@redhat.com, ruansy.fnst@fujitsu.com, nvdimm@lists.linux.dev, linux-xfs@vger.kernel.org, linux-ext4@vger.kernel.org, jglisse@redhat.com, Alistair Popple
Subject: [RFC 08/10] fs/dax: Properly refcount fs dax pages
Date: Thu, 11 Apr 2024 10:57:29 +1000
Message-ID: <5cc5a152d2a03e2702be259c81af2bfe424303cd.1712796818.git-series.apopple@nvidia.com>
Currently fs dax pages are considered free when the refcount drops to
one and their refcounts are not increased when mapped via PTEs or
decreased when unmapped. This requires special logic in mm paths to
detect that these pages should not be properly refcounted, and to
detect when the refcount drops to one instead of zero.

On the other hand get_user_pages(), etc. will properly refcount fs dax
pages by taking a reference and dropping it when the page is unpinned.

Tracking this special behaviour requires extra PTE bits (eg.
pte_devmap) and introduces rules that are potentially confusing and
specific to FS DAX pages.

To fix this, and to possibly allow removal of the special PTE bits in
future, convert the fs dax page refcounts to be zero based and instead
take a reference on the page each time it is mapped as is currently the
case for normal pages. This may also allow a future clean-up to remove
the pgmap refcounting that is currently done in mm/gup.c.
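A conceptual before/after sketch of the idle test this description is about (compare the dax_busy_page() hunk in the diff below); the helper names are illustrative only and not part of the patch.

#include <linux/mm.h>

static bool fsdax_page_busy_old(struct page *page)
{
	/* Before this series: pages start life at refcount 1, so idle == 1. */
	return page_ref_count(page) > 1;
}

static bool fsdax_page_busy_new(struct page *page)
{
	/*
	 * After this series: mappings and pins hold real references and a
	 * free page drops to zero, at which point pgmap->ops->page_free()
	 * (dax_page_free) runs.
	 */
	return page_ref_count(page) != 0;
}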
Signed-off-by: Alistair Popple --- drivers/dax/super.c | 2 +- drivers/nvdimm/pmem.c | 9 +--- fs/dax.c | 91 +++++++++++++++++++++++++++++++++--------- fs/fuse/virtio_fs.c | 3 +- include/linux/dax.h | 6 ++- include/linux/huge_mm.h | 1 +- include/linux/mm.h | 34 +--------------- mm/gup.c | 9 +--- mm/huge_memory.c | 80 +++++++++++++++++++++++++++++++++++-- mm/internal.h | 2 +- mm/memory-failure.c | 6 +-- mm/memory.c | 82 ++++++++++++++++++++++++++++++++++---- mm/memremap.c | 24 +---------- mm/mm_init.c | 3 +- mm/swap.c | 2 +- 15 files changed, 251 insertions(+), 103 deletions(-) diff --git a/drivers/dax/super.c b/drivers/dax/super.c index 0da9232..d393cd3 100644 --- a/drivers/dax/super.c +++ b/drivers/dax/super.c @@ -256,7 +256,7 @@ EXPORT_SYMBOL_GPL(dax_holder_notify_failure); void arch_wb_cache_pmem(void *addr, size_t size); void dax_flush(struct dax_device *dax_dev, void *addr, size_t size) { - if (unlikely(!dax_write_cache_enabled(dax_dev))) + if (unlikely(dax_dev && !dax_write_cache_enabled(dax_dev))) return; arch_wb_cache_pmem(addr, size); diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c index b027e1f..c7cb6b4 100644 --- a/drivers/nvdimm/pmem.c +++ b/drivers/nvdimm/pmem.c @@ -505,7 +505,7 @@ static int pmem_attach_disk(struct device *dev, pmem->disk = disk; pmem->pgmap.owner = pmem; - pmem->pfn_flags = PFN_DEV; + pmem->pfn_flags = 0; if (is_nd_pfn(dev)) { pmem->pgmap.type = MEMORY_DEVICE_FS_DAX; pmem->pgmap.ops = &fsdax_pagemap_ops; @@ -514,7 +514,7 @@ static int pmem_attach_disk(struct device *dev, pmem->data_offset = le64_to_cpu(pfn_sb->dataoff); pmem->pfn_pad = resource_size(res) - range_len(&pmem->pgmap.range); - pmem->pfn_flags |= PFN_MAP; + blk_queue_flag_set(QUEUE_FLAG_DAX, q); bb_range = pmem->pgmap.range; bb_range.start += pmem->data_offset; } else if (pmem_should_map_pages(dev)) { @@ -524,9 +524,10 @@ static int pmem_attach_disk(struct device *dev, pmem->pgmap.type = MEMORY_DEVICE_FS_DAX; pmem->pgmap.ops = &fsdax_pagemap_ops; addr = devm_memremap_pages(dev, &pmem->pgmap); - pmem->pfn_flags |= PFN_MAP; + blk_queue_flag_set(QUEUE_FLAG_DAX, q); bb_range = pmem->pgmap.range; } else { + pmem->pfn_flags = PFN_DEV; addr = devm_memremap(dev, pmem->phys_addr, pmem->size, ARCH_MEMREMAP_PMEM); bb_range.start = res->start; @@ -545,8 +546,6 @@ static int pmem_attach_disk(struct device *dev, blk_queue_max_hw_sectors(q, UINT_MAX); blk_queue_flag_set(QUEUE_FLAG_NONROT, q); blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, q); - if (pmem->pfn_flags & PFN_MAP) - blk_queue_flag_set(QUEUE_FLAG_DAX, q); disk->fops = &pmem_fops; disk->private_data = pmem; diff --git a/fs/dax.c b/fs/dax.c index 17b1c5f..a45793f 100644 --- a/fs/dax.c +++ b/fs/dax.c @@ -71,6 +71,11 @@ static unsigned long dax_to_pfn(void *entry) return xa_to_value(entry) >> DAX_SHIFT; } +static struct folio *dax_to_folio(void *entry) +{ + return page_folio(pfn_to_page(dax_to_pfn(entry))); +} + static void *dax_make_entry(pfn_t pfn, unsigned long flags) { return xa_mk_value(flags | (pfn_t_to_pfn(pfn) << DAX_SHIFT)); @@ -318,7 +323,44 @@ static unsigned long dax_end_pfn(void *entry) */ #define for_each_mapped_pfn(entry, pfn) \ for (pfn = dax_to_pfn(entry); \ - pfn < dax_end_pfn(entry); pfn++) + pfn < dax_end_pfn(entry); pfn++) + +static void dax_device_folio_init(struct folio *folio, int order) +{ + int orig_order = folio_order(folio); + int i; + + if (orig_order != order) { + for (i = 0; i < (1UL << orig_order); i++) + ClearPageHead(folio_page(folio, i)); + } + + if (order > 0) { + prep_compound_page(&folio->page, order); + if (order 
> 1) { + VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio); + INIT_LIST_HEAD(&folio->_deferred_list); + } + } +} + +static void dax_associate_new_entry(void *entry, struct address_space *mapping, pgoff_t index) +{ + unsigned long order = dax_entry_order(entry); + struct folio *folio = dax_to_folio(entry); + + if (!dax_entry_size(entry)) + return; + + // We don't hold a reference for the DAX pagecache entry for the page. But we + // need to initialise the folio so we can hand it out. Nothing else should have + // a reference if it's zeroed either. + WARN_ON_ONCE(folio_ref_count(folio)); + WARN_ON_ONCE(folio->mapping); + dax_device_folio_init(folio, order); + folio->mapping = mapping; + folio->index = index; +} static struct page *dax_busy_page(void *entry) { @@ -327,7 +369,7 @@ static struct page *dax_busy_page(void *entry) for_each_mapped_pfn(entry, pfn) { struct page *page = pfn_to_page(pfn); - if (page_ref_count(page) > 1) + if (page_ref_count(page)) return page; } return NULL; @@ -346,10 +388,10 @@ dax_entry_t dax_lock_page(struct page *page) XA_STATE(xas, NULL, 0); void *entry; - /* Ensure page->mapping isn't freed while we look at it */ + /* Ensure page_folio(page)->mapping isn't freed while we look at it */ rcu_read_lock(); for (;;) { - struct address_space *mapping = READ_ONCE(page->mapping); + struct address_space *mapping = READ_ONCE(page_folio(page)->mapping); entry = NULL; if (!mapping || !dax_mapping(mapping)) @@ -368,7 +410,7 @@ dax_entry_t dax_lock_page(struct page *page) xas.xa = &mapping->i_pages; xas_lock_irq(&xas); - if (mapping != page->mapping) { + if (mapping != page_folio(page)->mapping) { xas_unlock_irq(&xas); continue; } @@ -390,7 +432,7 @@ dax_entry_t dax_lock_page(struct page *page) void dax_unlock_page(struct page *page, dax_entry_t cookie) { - struct address_space *mapping = page->mapping; + struct address_space *mapping = page_folio(page)->mapping; XA_STATE(xas, &mapping->i_pages, page->index); if (S_ISCHR(mapping->host->i_mode)) @@ -662,8 +704,8 @@ struct page *dax_layout_busy_page(struct address_space *mapping) } EXPORT_SYMBOL_GPL(dax_layout_busy_page); -static int __dax_invalidate_entry(struct address_space *mapping, - pgoff_t index, bool trunc) +int __dax_invalidate_entry(struct address_space *mapping, + pgoff_t index, bool trunc) { XA_STATE(xas, &mapping->i_pages, index); int ret = 0; @@ -813,6 +855,11 @@ static void *dax_insert_entry(struct xa_state *xas, struct vm_fault *vmf, if (shared || dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) { void *old; + if (!shared) { + dax_associate_new_entry(new_entry, mapping, + linear_page_index(vmf->vma, vmf->address)); + } + /* * Only swap our new entry into the page cache if the current * entry is a zero page or an empty entry. 
If a normal PTE or @@ -1000,9 +1047,7 @@ static int dax_iomap_direct_access(const struct iomap *iomap, loff_t pos, goto out; if (pfn_t_to_pfn(*pfnp) & (PHYS_PFN(size)-1)) goto out; - /* For larger pages we need devmap */ - if (length > 1 && !pfn_t_devmap(*pfnp)) - goto out; + rc = 0; out_check_addr: @@ -1109,7 +1154,7 @@ static vm_fault_t dax_load_hole(struct xa_state *xas, struct vm_fault *vmf, *entry = dax_insert_entry(xas, vmf, iter, *entry, pfn, DAX_ZERO_PAGE); - ret = vmf_insert_mixed(vmf->vma, vaddr, pfn); + ret = dax_insert_pfn(vmf->vma, vaddr, pfn, false); trace_dax_load_hole(inode, vmf, ret); return ret; } @@ -1602,12 +1647,10 @@ static vm_fault_t dax_fault_iter(struct vm_fault *vmf, /* insert PMD pfn */ if (pmd) - return vmf_insert_pfn_pmd(vmf, pfn, write); + return dax_insert_pfn_pmd(vmf, pfn, write); /* insert PTE pfn */ - if (write) - return vmf_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn); - return vmf_insert_mixed(vmf->vma, vmf->address, pfn); + return dax_insert_pfn(vmf->vma, vmf->address, pfn, write); } static vm_fault_t dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp, @@ -1864,10 +1907,10 @@ dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn, unsigned int order) dax_lock_entry(&xas, entry); xas_unlock_irq(&xas); if (order == 0) - ret = vmf_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn); + ret = dax_insert_pfn(vmf->vma, vmf->address, pfn, true); #ifdef CONFIG_FS_DAX_PMD else if (order == PMD_ORDER) - ret = vmf_insert_pfn_pmd(vmf, pfn, FAULT_FLAG_WRITE); + ret = dax_insert_pfn_pmd(vmf, pfn, FAULT_FLAG_WRITE); #endif else ret = VM_FAULT_FALLBACK; @@ -1984,6 +2027,18 @@ EXPORT_SYMBOL_GPL(dax_remap_file_range_prep); void dax_page_free(struct page *page) { + /* + * Set trunc to true because we want to remove the entry from the DAX + * page-cache. + */ + __dax_invalidate_entry(page->mapping, page->index, true); + + /* + * Make sure we flush any cached data to the page now that it's free. + */ + if (PageDirty(page)) + dax_flush(NULL, page_address(page), 1); + wake_up_var(page); } EXPORT_SYMBOL_GPL(dax_page_free); diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c index 11bfc28..c196cae 100644 --- a/fs/fuse/virtio_fs.c +++ b/fs/fuse/virtio_fs.c @@ -761,8 +761,7 @@ static long virtio_fs_direct_access(struct dax_device *dax_dev, pgoff_t pgoff, if (kaddr) *kaddr = fs->window_kaddr + offset; if (pfn) - *pfn = phys_to_pfn_t(fs->window_phys_addr + offset, - PFN_DEV | PFN_MAP); + *pfn = phys_to_pfn_t(fs->window_phys_addr + offset, 0); return nr_pages > max_nr_pages ? max_nr_pages : nr_pages; } diff --git a/include/linux/dax.h b/include/linux/dax.h index c0c3206..74a40e5 100644 --- a/include/linux/dax.h +++ b/include/linux/dax.h @@ -217,9 +217,13 @@ static inline int dax_wait_page_idle(struct page *page, void (cb)(struct inode *), struct inode *inode) { + int i = 0; int ret; - ret = ___wait_var_event(page, page_ref_count(page) == 1, + /* + * Wait for the pgmap->ops->page_free callback. 
+ */ + ret = ___wait_var_event(page, !page_ref_count(page) || i++, TASK_INTERRUPTIBLE, 0, 0, cb(inode)); return ret; } diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h index fa0350b..bf49efa 100644 --- a/include/linux/huge_mm.h +++ b/include/linux/huge_mm.h @@ -39,6 +39,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write); vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write); +vm_fault_t dax_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write); enum transparent_hugepage_flag { TRANSPARENT_HUGEPAGE_UNSUPPORTED, diff --git a/include/linux/mm.h b/include/linux/mm.h index bf5d0b1..f10aa62 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1040,6 +1040,8 @@ int vma_is_stack_for_current(struct vm_area_struct *vma); struct mmu_gather; struct inode; +extern void prep_compound_page(struct page *page, unsigned int order); + /* * compound_order() can be called without holding a reference, which means * that niceties like page_folio() don't work. These callers should be @@ -1400,30 +1402,6 @@ vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf); * back into memory. */ -#if defined(CONFIG_ZONE_DEVICE) && defined(CONFIG_FS_DAX) -DECLARE_STATIC_KEY_FALSE(devmap_managed_key); - -bool __put_devmap_managed_page_refs(struct page *page, int refs); -static inline bool put_devmap_managed_page_refs(struct page *page, int refs) -{ - if (!static_branch_unlikely(&devmap_managed_key)) - return false; - if (!is_zone_device_page(page)) - return false; - return __put_devmap_managed_page_refs(page, refs); -} -#else /* CONFIG_ZONE_DEVICE && CONFIG_FS_DAX */ -static inline bool put_devmap_managed_page_refs(struct page *page, int refs) -{ - return false; -} -#endif /* CONFIG_ZONE_DEVICE && CONFIG_FS_DAX */ - -static inline bool put_devmap_managed_page(struct page *page) -{ - return put_devmap_managed_page_refs(page, 1); -} - /* 127: arbitrary random number, small enough to assemble well */ #define folio_ref_zero_or_close_to_overflow(folio) \ ((unsigned int) folio_ref_count(folio) + 127u <= 127u) @@ -1535,12 +1513,6 @@ static inline void put_page(struct page *page) { struct folio *folio = page_folio(page); - /* - * For some devmap managed pages we need to catch refcount transition - * from 2 to 1: - */ - if (put_devmap_managed_page(&folio->page)) - return; folio_put(folio); } @@ -3465,6 +3437,8 @@ int vm_map_pages(struct vm_area_struct *vma, struct page **pages, unsigned long num); int vm_map_pages_zero(struct vm_area_struct *vma, struct page **pages, unsigned long num); +vm_fault_t dax_insert_pfn(struct vm_area_struct *vma, + unsigned long addr, pfn_t pfn, bool write); vm_fault_t vmf_insert_pfn(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn); vm_fault_t vmf_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr, diff --git a/mm/gup.c b/mm/gup.c index a9c8a09..6a3141d 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -89,8 +89,7 @@ static inline struct folio *try_get_folio(struct page *page, int refs) * belongs to this folio. 
*/ if (unlikely(page_folio(page) != folio)) { - if (!put_devmap_managed_page_refs(&folio->page, refs)) - folio_put_refs(folio, refs); + folio_put_refs(folio, refs); goto retry; } @@ -156,8 +155,7 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags) */ if (unlikely((flags & FOLL_LONGTERM) && !folio_is_longterm_pinnable(folio))) { - if (!put_devmap_managed_page_refs(&folio->page, refs)) - folio_put_refs(folio, refs); + folio_put_refs(folio, refs); return NULL; } @@ -198,8 +196,7 @@ static void gup_put_folio(struct folio *folio, int refs, unsigned int flags) refs *= GUP_PIN_COUNTING_BIAS; } - if (!put_devmap_managed_page_refs(&folio->page, refs)) - folio_put_refs(folio, refs); + folio_put_refs(folio, refs); } /** diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 064fbd9..c657886 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -901,8 +901,6 @@ vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write) * but we need to be consistent with PTEs and architectures that * can't support a 'special' bit. */ - BUG_ON(!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) && - !pfn_t_devmap(pfn)); BUG_ON((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) == (VM_PFNMAP|VM_MIXEDMAP)); BUG_ON((vma->vm_flags & VM_PFNMAP) && is_cow_mapping(vma->vm_flags)); @@ -923,6 +921,79 @@ vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write) } EXPORT_SYMBOL_GPL(vmf_insert_pfn_pmd); +static vm_fault_t insert_page_pmd(struct vm_area_struct *vma, unsigned long addr, + pmd_t *pmd, pfn_t pfn, pgprot_t prot, bool write) +{ + struct mm_struct *mm = vma->vm_mm; + pmd_t entry; + spinlock_t *ptl; + pgprot_t pgprot = vma->vm_page_prot; + pgtable_t pgtable = NULL; + struct page *page; + + if (addr < vma->vm_start || addr >= vma->vm_end) + return VM_FAULT_SIGBUS; + + if (arch_needs_pgtable_deposit()) { + pgtable = pte_alloc_one(vma->vm_mm); + if (!pgtable) + return VM_FAULT_OOM; + } + + track_pfn_insert(vma, &pgprot, pfn); + + ptl = pmd_lock(mm, pmd); + if (!pmd_none(*pmd)) { + if (write) { + if (pmd_pfn(*pmd) != pfn_t_to_pfn(pfn)) { + WARN_ON_ONCE(!is_huge_zero_pmd(*pmd)); + goto out_unlock; + } + entry = pmd_mkyoung(*pmd); + entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma); + if (pmdp_set_access_flags(vma, addr, pmd, entry, 1)) + update_mmu_cache_pmd(vma, addr, pmd); + } + + goto out_unlock; + } + + entry = pmd_mkhuge(pfn_t_pmd(pfn, prot)); + if (pfn_t_devmap(pfn)) + entry = pmd_mkdevmap(entry); + if (write) { + entry = pmd_mkyoung(pmd_mkdirty(entry)); + entry = maybe_pmd_mkwrite(entry, vma); + } + + if (pgtable) { + pgtable_trans_huge_deposit(mm, pmd, pgtable); + mm_inc_nr_ptes(mm); + pgtable = NULL; + } + + page = pfn_t_to_page(pfn); + folio_get(page_folio(page)); + folio_add_file_rmap_range(page_folio(page), page, 1, vma, true); + add_mm_counter(mm, mm_counter_file(page), HPAGE_PMD_NR); + set_pmd_at(mm, addr, pmd, entry); + update_mmu_cache_pmd(vma, addr, pmd); + +out_unlock: + spin_unlock(ptl); + if (pgtable) + pte_free(mm, pgtable); + + return VM_FAULT_NOPAGE; +} + +vm_fault_t dax_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write) +{ + return insert_page_pmd(vmf->vma, vmf->address & PMD_MASK, vmf->pmd, pfn, + vmf->vma->vm_page_prot, write); +} +EXPORT_SYMBOL_GPL(dax_insert_pfn_pmd); + #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD static pud_t maybe_pud_mkwrite(pud_t pud, struct vm_area_struct *vma) { @@ -1677,7 +1748,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, tlb->fullmm); arch_check_zapped_pmd(vma, orig_pmd); 
tlb_remove_pmd_tlb_entry(tlb, pmd, addr); - if (vma_is_special_huge(vma)) { + if (!vma_is_dax(vma) && vma_is_special_huge(vma)) { if (arch_needs_pgtable_deposit()) zap_deposited_table(tlb->mm, pmd); spin_unlock(ptl); @@ -2092,8 +2163,9 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, */ if (arch_needs_pgtable_deposit()) zap_deposited_table(mm, pmd); - if (vma_is_special_huge(vma)) + if (!vma_is_dax(vma) && vma_is_special_huge(vma)) { return; + } if (unlikely(is_pmd_migration_entry(old_pmd))) { swp_entry_t entry; diff --git a/mm/internal.h b/mm/internal.h index 30cf724..81597b6 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -434,8 +434,6 @@ static inline void prep_compound_tail(struct page *head, int tail_idx) set_page_private(p, 0); } -extern void prep_compound_page(struct page *page, unsigned int order); - extern void post_alloc_hook(struct page *page, unsigned int order, gfp_t gfp_flags); extern int user_min_free_kbytes; diff --git a/mm/memory-failure.c b/mm/memory-failure.c index 4d6e43c..de64958 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -394,18 +394,18 @@ static unsigned long dev_pagemap_mapping_shift(struct vm_area_struct *vma, pud = pud_offset(p4d, address); if (!pud_present(*pud)) return 0; - if (pud_devmap(*pud)) + if (pud_trans_huge(*pud)) return PUD_SHIFT; pmd = pmd_offset(pud, address); if (!pmd_present(*pmd)) return 0; - if (pmd_devmap(*pmd)) + if (pmd_trans_huge(*pmd)) return PMD_SHIFT; pte = pte_offset_map(pmd, address); if (!pte) return 0; ptent = ptep_get(pte); - if (pte_present(ptent) && pte_devmap(ptent)) + if (pte_present(ptent)) ret = PAGE_SHIFT; pte_unmap(pte); return ret; diff --git a/mm/memory.c b/mm/memory.c index 52248d4..418b630 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -1834,15 +1834,44 @@ static int validate_page_before_insert(struct page *page) } static int insert_page_into_pte_locked(struct vm_area_struct *vma, pte_t *pte, - unsigned long addr, struct page *page, pgprot_t prot) + unsigned long addr, struct page *page, pgprot_t prot, bool mkwrite) { - if (!pte_none(ptep_get(pte))) + pte_t entry = ptep_get(pte); + + if (!pte_none(entry)) { + if (mkwrite) { + /* + * For read faults on private mappings the PFN passed + * in may not match the PFN we have mapped if the + * mapped PFN is a writeable COW page. In the mkwrite + * case we are creating a writable PTE for a shared + * mapping and we expect the PFNs to match. If they + * don't match, we are likely racing with block + * allocation and mapping invalidation so just skip the + * update. + */ + if (pte_pfn(entry) != page_to_pfn(page)) { + WARN_ON_ONCE(!is_zero_pfn(pte_pfn(entry))); + return -EFAULT; + } + entry = maybe_mkwrite(entry, vma); + entry = pte_mkyoung(entry); + if (ptep_set_access_flags(vma, addr, pte, entry, 1)) + update_mmu_cache(vma, addr, pte); + return 0; + } return -EBUSY; + } + /* Ok, finally just insert the thing.. */ get_page(page); + if (mkwrite) + entry = maybe_mkwrite(mk_pte(page, prot), vma); + else + entry = mk_pte(page, prot); inc_mm_counter(vma->vm_mm, mm_counter_file(page)); page_add_file_rmap(page, vma, false); - set_pte_at(vma->vm_mm, addr, pte, mk_pte(page, prot)); + set_pte_at(vma->vm_mm, addr, pte, entry); return 0; } @@ -1854,7 +1883,7 @@ static int insert_page_into_pte_locked(struct vm_area_struct *vma, pte_t *pte, * pages reserved for the old functions anyway. 
*/ static int insert_page(struct vm_area_struct *vma, unsigned long addr, - struct page *page, pgprot_t prot) + struct page *page, pgprot_t prot, bool mkwrite) { int retval; pte_t *pte; @@ -1867,7 +1896,7 @@ static int insert_page(struct vm_area_struct *vma, unsigned long addr, pte = get_locked_pte(vma->vm_mm, addr, &ptl); if (!pte) goto out; - retval = insert_page_into_pte_locked(vma, pte, addr, page, prot); + retval = insert_page_into_pte_locked(vma, pte, addr, page, prot, mkwrite); pte_unmap_unlock(pte, ptl); out: return retval; @@ -1883,7 +1912,7 @@ static int insert_page_in_batch_locked(struct vm_area_struct *vma, pte_t *pte, err = validate_page_before_insert(page); if (err) return err; - return insert_page_into_pte_locked(vma, pte, addr, page, prot); + return insert_page_into_pte_locked(vma, pte, addr, page, prot, false); } /* insert_pages() amortizes the cost of spinlock operations @@ -2020,7 +2049,7 @@ int vm_insert_page(struct vm_area_struct *vma, unsigned long addr, BUG_ON(vma->vm_flags & VM_PFNMAP); vm_flags_set(vma, VM_MIXEDMAP); } - return insert_page(vma, addr, page, vma->vm_page_prot); + return insert_page(vma, addr, page, vma->vm_page_prot, false); } EXPORT_SYMBOL(vm_insert_page); @@ -2294,7 +2323,7 @@ static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma, * result in pfn_t_has_page() == false. */ page = pfn_to_page(pfn_t_to_pfn(pfn)); - err = insert_page(vma, addr, page, pgprot); + err = insert_page(vma, addr, page, pgprot, mkwrite); } else { return insert_pfn(vma, addr, pfn, pgprot, mkwrite); } @@ -2307,6 +2336,43 @@ static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma, return VM_FAULT_NOPAGE; } +vm_fault_t dax_insert_pfn(struct vm_area_struct *vma, + unsigned long addr, pfn_t pfn_t, bool write) +{ + pgprot_t pgprot = vma->vm_page_prot; + unsigned long pfn = pfn_t_to_pfn(pfn_t); + struct page *page = pfn_to_page(pfn); + int err; + + if (addr < vma->vm_start || addr >= vma->vm_end) + return VM_FAULT_SIGBUS; + + track_pfn_insert(vma, &pgprot, pfn_t); + + if (!pfn_modify_allowed(pfn, pgprot)) + return VM_FAULT_SIGBUS; + + /* + * We refcount the page normally so make sure pfn_valid is true. + */ + if (!pfn_t_valid(pfn_t)) + return VM_FAULT_SIGBUS; + + WARN_ON_ONCE(pfn_t_devmap(pfn_t)); + + if (WARN_ON(is_zero_pfn(pfn) && write)) + return VM_FAULT_SIGBUS; + + err = insert_page(vma, addr, page, pgprot, write); + if (err == -ENOMEM) + return VM_FAULT_OOM; + if (err < 0 && err != -EBUSY) + return VM_FAULT_SIGBUS; + + return VM_FAULT_NOPAGE; +} +EXPORT_SYMBOL_GPL(dax_insert_pfn); + vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr, pfn_t pfn) { diff --git a/mm/memremap.c b/mm/memremap.c index 619b059..3aab098 100644 --- a/mm/memremap.c +++ b/mm/memremap.c @@ -505,18 +505,20 @@ void free_zone_device_page(struct page *page) * handled differently or not done at all, so there is no need * to clear page->mapping. */ - page->mapping = NULL; page_dev_pagemap(page)->ops->page_free(page); if (page->pgmap->type == MEMORY_DEVICE_PRIVATE || page->pgmap->type == MEMORY_DEVICE_COHERENT) put_dev_pagemap(page->pgmap); - else if (page->pgmap->type != MEMORY_DEVICE_PCI_P2PDMA) + else if (page->pgmap->type != MEMORY_DEVICE_PCI_P2PDMA && + page->pgmap->type != MEMORY_DEVICE_FS_DAX) /* * Reset the page count to 1 to prepare for handing out the page * again. 
		 */
		set_page_count(page, 1);
+
+	page->mapping = NULL;
 }
 
 void zone_device_page_init(struct page *page)
@@ -530,21 +532,3 @@ void zone_device_page_init(struct page *page)
 	lock_page(page);
 }
 EXPORT_SYMBOL_GPL(zone_device_page_init);
-
-#ifdef CONFIG_FS_DAX
-bool __put_devmap_managed_page_refs(struct page *page, int refs)
-{
-	if (page->pgmap->type != MEMORY_DEVICE_FS_DAX)
-		return false;
-
-	/*
-	 * fsdax page refcounts are 1-based, rather than 0-based: if
-	 * refcount is 1, then the page is free and the refcount is
-	 * stable because nobody holds a reference on the page.
-	 */
-	if (page_ref_sub_return(page, refs) == 1)
-		wake_up_var(&page->_refcount);
-	return true;
-}
-EXPORT_SYMBOL(__put_devmap_managed_page_refs);
-#endif /* CONFIG_FS_DAX */
diff --git a/mm/mm_init.c b/mm/mm_init.c
index da45abd..2a2864e 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1008,7 +1008,8 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
 	 */
 	if (pgmap->type == MEMORY_DEVICE_PRIVATE ||
 	    pgmap->type == MEMORY_DEVICE_COHERENT ||
-	    pgmap->type == MEMORY_DEVICE_PCI_P2PDMA)
+	    pgmap->type == MEMORY_DEVICE_PCI_P2PDMA ||
+	    pgmap->type == MEMORY_DEVICE_FS_DAX)
 		set_page_count(page, 0);
 }
 
diff --git a/mm/swap.c b/mm/swap.c
index cd8f015..fe76552 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -990,8 +990,6 @@ void release_pages(release_pages_arg arg, int nr)
 			unlock_page_lruvec_irqrestore(lruvec, flags);
 			lruvec = NULL;
 		}
-		if (put_devmap_managed_page(&folio->page))
-			continue;
 		if (folio_put_testzero(folio))
 			free_zone_device_page(&folio->page);
 		continue;

From patchwork Thu Apr 11 00:57:30 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 13625231
From: Alistair Popple
To: linux-mm@kvack.org
Cc: david@fromorbit.com, dan.j.williams@intel.com, jhubbard@nvidia.com,
    rcampbell@nvidia.com, willy@infradead.org, jgg@nvidia.com,
    linux-fsdevel@vger.kernel.org, jack@suse.cz, djwong@kernel.org,
    hch@lst.de, david@redhat.com, ruansy.fnst@fujitsu.com,
    nvdimm@lists.linux.dev, linux-xfs@vger.kernel.org,
    linux-ext4@vger.kernel.org, jglisse@redhat.com, Alistair Popple
Subject: [RFC 09/10] mm/khugepage.c: Warn if trying to scan devmap pmd
Date: Thu, 11 Apr 2024 10:57:30 +1000
Message-ID: <68427031c58645ba4b751022bf032ffd6b247427.1712796818.git-series.apopple@nvidia.com>
X-Mailer: git-send-email 2.43.0

The only user of devmap PTEs is FS DAX, and khugepaged should not be
scanning these VMAs. This is checked by calling hugepage_vma_check.
Therefore khugepaged should never encounter a devmap PTE. Warn if this
occurs.

Signed-off-by: Alistair Popple

---

Note this is a transitory patch to test the above assumption both at
runtime and during review. I will likely remove it as the whole thing
gets deleted when pXX_devmap is removed.
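The assumption above, that khugepaged filters out DAX VMAs before it ever
looks at a PMD, can be pictured with a minimal sketch. The helper name below
is hypothetical; vma_is_dax() is the existing predicate, and the real
hugepage_vma_check() takes more arguments and performs many other eligibility
tests.

/* Hypothetical, simplified sketch; not part of the patch. */
static bool khugepaged_vma_allowed_sketch(struct vm_area_struct *vma)
{
	if (vma_is_dax(vma))		/* FS DAX VMAs are never collapsed */
		return false;
	/* ...the real check also tests VMA flags, file backing and policy... */
	return true;
}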
---
 mm/khugepaged.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 88433cc..b10db15 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -955,7 +955,7 @@ static int find_pmd_or_thp_or_none(struct mm_struct *mm,
 		return SCAN_PMD_NULL;
 	if (pmd_trans_huge(pmde))
 		return SCAN_PMD_MAPPED;
-	if (pmd_devmap(pmde))
+	if (WARN_ON_ONCE(pmd_devmap(pmde)))
 		return SCAN_PMD_NULL;
 	if (pmd_bad(pmde))
 		return SCAN_PMD_NULL;

From patchwork Thu Apr 11 00:57:31 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 13625232
From: Alistair Popple
To: linux-mm@kvack.org
Cc: david@fromorbit.com, dan.j.williams@intel.com, jhubbard@nvidia.com,
    rcampbell@nvidia.com, willy@infradead.org, jgg@nvidia.com,
    linux-fsdevel@vger.kernel.org, jack@suse.cz, djwong@kernel.org,
    hch@lst.de, david@redhat.com, ruansy.fnst@fujitsu.com,
    nvdimm@lists.linux.dev, linux-xfs@vger.kernel.org,
    linux-ext4@vger.kernel.org, jglisse@redhat.com, Alistair Popple
Subject: [RFC 10/10] mm: Remove pXX_devmap
Date: Thu, 11 Apr 2024 10:57:31 +1000
Message-ID: <93e3772f172918a3c489d803f7580309c3a42fff.1712796818.git-series.apopple@nvidia.com>
X-Mailer: git-send-email 2.43.0
The devmap PTE special bit was used to detect mappings of FS DAX pages. This
tracking was required to ensure the generic mm did not manipulate the page
reference counts, as FS DAX implemented its own reference counting scheme.

Now that FS DAX pages have their references counted the same way as normal
pages, this tracking is no longer needed and can be removed.

Almost all existing uses of pmd_devmap() are paired with a check of
pmd_trans_huge(). As pmd_trans_huge() now returns true for FS DAX pages,
dropping the check in these cases doesn't change anything.

However, care needs to be taken because pmd_trans_huge() also checks that a
page is not an FS DAX page. This is dealt with either by checking
!vma_is_dax() or by relying on the fact that the page pointer was obtained
from a page list. The latter is possible because zone device pages cannot
appear in any page list, since they share page->lru with page->pgmap.
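As a rough illustration of the conversion pattern described above, here is a
minimal sketch. The helper names are hypothetical; pmd_trans_huge(),
pmd_devmap(), vma_is_dax() and vma_is_special_huge() are the existing kernel
predicates referred to in the message. These fragments are illustrative, not
hunks from the patch.

/* Hypothetical helpers illustrating the conversion; not part of the patch. */
static inline bool pmd_is_huge_sketch(pmd_t pmd)
{
	/* Old form: pmd_trans_huge(pmd) || pmd_devmap(pmd) */
	return pmd_trans_huge(pmd);	/* now also true for FS DAX PMDs */
}

static inline bool zap_needs_special_handling_sketch(struct vm_area_struct *vma)
{
	/*
	 * Paths that previously relied on the devmap checks to exclude DAX
	 * now need an explicit vma_is_dax() test alongside vma_is_special_huge().
	 */
	return !vma_is_dax(vma) && vma_is_special_huge(vma);
}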
Signed-off-by: Alistair Popple --- Documentation/mm/arch_pgtable_helpers.rst | 6 +- arch/arm64/include/asm/pgtable.h | 24 +--- arch/powerpc/include/asm/book3s/64/pgtable.h | 42 +------ arch/powerpc/mm/book3s64/hash_pgtable.c | 3 +- arch/powerpc/mm/book3s64/pgtable.c | 8 +- arch/powerpc/mm/book3s64/radix_pgtable.c | 5 +- arch/powerpc/mm/pgtable.c | 2 +- arch/x86/include/asm/pgtable.h | 31 +---- fs/dax.c | 5 +- fs/userfaultfd.c | 2 +- include/linux/huge_mm.h | 10 +- include/linux/mm.h | 7 +- include/linux/pgtable.h | 17 +-- mm/debug_vm_pgtable.c | 51 +------- mm/gup.c | 151 +-------------------- mm/hmm.c | 5 +- mm/huge_memory.c | 100 +------------- mm/khugepaged.c | 2 +- mm/mapping_dirty_helpers.c | 4 +- mm/memory.c | 25 +--- mm/migrate_device.c | 2 +- mm/mprotect.c | 2 +- mm/mremap.c | 5 +- mm/page_vma_mapped.c | 5 +- mm/pgtable-generic.c | 7 +- mm/vmscan.c | 5 +- 26 files changed, 48 insertions(+), 478 deletions(-) diff --git a/Documentation/mm/arch_pgtable_helpers.rst b/Documentation/mm/arch_pgtable_helpers.rst index c82e3ee..ab3238c 100644 --- a/Documentation/mm/arch_pgtable_helpers.rst +++ b/Documentation/mm/arch_pgtable_helpers.rst @@ -32,8 +32,6 @@ PTE Page Table Helpers +---------------------------+--------------------------------------------------+ | pte_protnone | Tests a PROT_NONE PTE | +---------------------------+--------------------------------------------------+ -| pte_devmap | Tests a ZONE_DEVICE mapped PTE | -+---------------------------+--------------------------------------------------+ | pte_soft_dirty | Tests a soft dirty PTE | +---------------------------+--------------------------------------------------+ | pte_swp_soft_dirty | Tests a soft dirty swapped PTE | @@ -108,8 +106,6 @@ PMD Page Table Helpers +---------------------------+--------------------------------------------------+ | pmd_protnone | Tests a PROT_NONE PMD | +---------------------------+--------------------------------------------------+ -| pmd_devmap | Tests a ZONE_DEVICE mapped PMD | -+---------------------------+--------------------------------------------------+ | pmd_soft_dirty | Tests a soft dirty PMD | +---------------------------+--------------------------------------------------+ | pmd_swp_soft_dirty | Tests a soft dirty swapped PMD | @@ -182,8 +178,6 @@ PUD Page Table Helpers +---------------------------+--------------------------------------------------+ | pud_write | Tests a writable PUD | +---------------------------+--------------------------------------------------+ -| pud_devmap | Tests a ZONE_DEVICE mapped PUD | -+---------------------------+--------------------------------------------------+ | pud_mkyoung | Creates a young PUD | +---------------------------+--------------------------------------------------+ | pud_mkold | Creates an old PUD | diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index 7f7d9b1..506f78f 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -107,7 +107,6 @@ static inline pteval_t __phys_to_pte_val(phys_addr_t phys) #define pte_user(pte) (!!(pte_val(pte) & PTE_USER)) #define pte_user_exec(pte) (!(pte_val(pte) & PTE_UXN)) #define pte_cont(pte) (!!(pte_val(pte) & PTE_CONT)) -#define pte_devmap(pte) (!!(pte_val(pte) & PTE_DEVMAP)) #define pte_tagged(pte) ((pte_val(pte) & PTE_ATTRINDX_MASK) == \ PTE_ATTRINDX(MT_NORMAL_TAGGED)) @@ -256,11 +255,6 @@ static inline pmd_t pmd_mkcont(pmd_t pmd) return __pmd(pmd_val(pmd) | PMD_SECT_CONT); } -static inline pte_t pte_mkdevmap(pte_t pte) -{ - return 
set_pte_bit(pte, __pgprot(PTE_DEVMAP | PTE_SPECIAL)); -} - static inline void set_pte(pte_t *ptep, pte_t pte) { WRITE_ONCE(*ptep, pte); @@ -506,14 +500,6 @@ static inline pmd_t pmd_mkinvalid(pmd_t pmd) #define pmd_mkhuge(pmd) (__pmd(pmd_val(pmd) & ~PMD_TABLE_BIT)) -#ifdef CONFIG_TRANSPARENT_HUGEPAGE -#define pmd_devmap(pmd) pte_devmap(pmd_pte(pmd)) -#endif -static inline pmd_t pmd_mkdevmap(pmd_t pmd) -{ - return pte_pmd(set_pte_bit(pmd_pte(pmd), __pgprot(PTE_DEVMAP))); -} - #define __pmd_to_phys(pmd) __pte_to_phys(pmd_pte(pmd)) #define __phys_to_pmd_val(phys) __phys_to_pte_val(phys) #define pmd_pfn(pmd) ((__pmd_to_phys(pmd) & PMD_MASK) >> PAGE_SHIFT) @@ -847,16 +833,6 @@ static inline int pmdp_set_access_flags(struct vm_area_struct *vma, { return ptep_set_access_flags(vma, address, (pte_t *)pmdp, pmd_pte(entry), dirty); } - -static inline int pud_devmap(pud_t pud) -{ - return 0; -} - -static inline int pgd_devmap(pgd_t pgd) -{ - return 0; -} #endif #ifdef CONFIG_PAGE_TABLE_CHECK diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h index 5c497c8..51351a0 100644 --- a/arch/powerpc/include/asm/book3s/64/pgtable.h +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h @@ -639,19 +639,6 @@ static inline pte_t pte_mkuser(pte_t pte) return __pte_raw(pte_raw(pte) & cpu_to_be64(~_PAGE_PRIVILEGED)); } -/* - * This is potentially called with a pmd as the argument, in which case it's not - * safe to check _PAGE_DEVMAP unless we also confirm that _PAGE_PTE is set. - * That's because the bit we use for _PAGE_DEVMAP is not reserved for software - * use in page directory entries (ie. non-ptes). - */ -static inline int pte_devmap(pte_t pte) -{ - u64 mask = cpu_to_be64(_PAGE_DEVMAP | _PAGE_PTE); - - return (pte_raw(pte) & mask) == mask; -} - static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) { /* FIXME!! 
check whether this need to be a conditional */ @@ -1428,35 +1415,6 @@ static inline bool arch_needs_pgtable_deposit(void) } extern void serialize_against_pte_lookup(struct mm_struct *mm); - -static inline pmd_t pmd_mkdevmap(pmd_t pmd) -{ - if (radix_enabled()) - return radix__pmd_mkdevmap(pmd); - return hash__pmd_mkdevmap(pmd); -} - -static inline pud_t pud_mkdevmap(pud_t pud) -{ - if (radix_enabled()) - return radix__pud_mkdevmap(pud); - BUG(); - return pud; -} - -static inline int pmd_devmap(pmd_t pmd) -{ - return pte_devmap(pmd_pte(pmd)); -} - -static inline int pud_devmap(pud_t pud) -{ - return pte_devmap(pud_pte(pud)); -} - -static inline int pgd_devmap(pgd_t pgd) -{ - return 0; } #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ diff --git a/arch/powerpc/mm/book3s64/hash_pgtable.c b/arch/powerpc/mm/book3s64/hash_pgtable.c index 988948d..82d3117 100644 --- a/arch/powerpc/mm/book3s64/hash_pgtable.c +++ b/arch/powerpc/mm/book3s64/hash_pgtable.c @@ -195,7 +195,7 @@ unsigned long hash__pmd_hugepage_update(struct mm_struct *mm, unsigned long addr unsigned long old; #ifdef CONFIG_DEBUG_VM - WARN_ON(!hash__pmd_trans_huge(*pmdp) && !pmd_devmap(*pmdp)); + WARN_ON(!hash__pmd_trans_huge(*pmdp)); assert_spin_locked(pmd_lockptr(mm, pmdp)); #endif @@ -227,7 +227,6 @@ pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long addres VM_BUG_ON(address & ~HPAGE_PMD_MASK); VM_BUG_ON(pmd_trans_huge(*pmdp)); - VM_BUG_ON(pmd_devmap(*pmdp)); pmd = *pmdp; pmd_clear(pmdp); diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c index 8f8a62d..8341957 100644 --- a/arch/powerpc/mm/book3s64/pgtable.c +++ b/arch/powerpc/mm/book3s64/pgtable.c @@ -50,7 +50,7 @@ int pmdp_set_access_flags(struct vm_area_struct *vma, unsigned long address, { int changed; #ifdef CONFIG_DEBUG_VM - WARN_ON(!pmd_trans_huge(*pmdp) && !pmd_devmap(*pmdp)); + WARN_ON(!pmd_trans_huge(*pmdp)); assert_spin_locked(pmd_lockptr(vma->vm_mm, pmdp)); #endif changed = !pmd_same(*(pmdp), entry); @@ -70,7 +70,6 @@ int pudp_set_access_flags(struct vm_area_struct *vma, unsigned long address, { int changed; #ifdef CONFIG_DEBUG_VM - WARN_ON(!pud_devmap(*pudp)); assert_spin_locked(pud_lockptr(vma->vm_mm, pudp)); #endif changed = !pud_same(*(pudp), entry); @@ -181,7 +180,7 @@ pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma, pmd_t pmd; VM_BUG_ON(addr & ~HPAGE_PMD_MASK); VM_BUG_ON((pmd_present(*pmdp) && !pmd_trans_huge(*pmdp) && - !pmd_devmap(*pmdp)) || !pmd_present(*pmdp)); + || !pmd_present(*pmdp)); pmd = pmdp_huge_get_and_clear(vma->vm_mm, addr, pmdp); /* * if it not a fullmm flush, then we can possibly end up converting @@ -199,8 +198,7 @@ pud_t pudp_huge_get_and_clear_full(struct vm_area_struct *vma, pud_t pud; VM_BUG_ON(addr & ~HPAGE_PMD_MASK); - VM_BUG_ON((pud_present(*pudp) && !pud_devmap(*pudp)) || - !pud_present(*pudp)); + VM_BUG_ON(!pud_present(*pudp)); pud = pudp_huge_get_and_clear(vma->vm_mm, addr, pudp); /* * if it not a fullmm flush, then we can possibly end up converting diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c index c6a4ac7..b50f999 100644 --- a/arch/powerpc/mm/book3s64/radix_pgtable.c +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c @@ -1362,7 +1362,7 @@ unsigned long radix__pmd_hugepage_update(struct mm_struct *mm, unsigned long add unsigned long old; #ifdef CONFIG_DEBUG_VM - WARN_ON(!radix__pmd_trans_huge(*pmdp) && !pmd_devmap(*pmdp)); + WARN_ON(!radix__pmd_trans_huge(*pmdp)); assert_spin_locked(pmd_lockptr(mm, pmdp)); #endif @@ -1379,7 
+1379,7 @@ unsigned long radix__pud_hugepage_update(struct mm_struct *mm, unsigned long add unsigned long old; #ifdef CONFIG_DEBUG_VM - WARN_ON(!pud_devmap(*pudp)); + WARN_ON(!pud_trans_huge(*pudp)); assert_spin_locked(pud_lockptr(mm, pudp)); #endif @@ -1397,7 +1397,6 @@ pmd_t radix__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long addre VM_BUG_ON(address & ~HPAGE_PMD_MASK); VM_BUG_ON(radix__pmd_trans_huge(*pmdp)); - VM_BUG_ON(pmd_devmap(*pmdp)); /* * khugepaged calls this for normal pmd */ diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c index 4d69bfb..d1f98e0 100644 --- a/arch/powerpc/mm/pgtable.c +++ b/arch/powerpc/mm/pgtable.c @@ -467,7 +467,7 @@ pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea, return NULL; #endif - if (pmd_trans_huge(pmd) || pmd_devmap(pmd)) { + if (pmd_trans_huge(pmd)) { if (is_thp) *is_thp = true; ret_pte = (pte_t *)pmdp; diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index e02b179..9257da3 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -267,7 +267,6 @@ static inline int pmd_large(pmd_t pte) } #ifdef CONFIG_TRANSPARENT_HUGEPAGE -/* NOTE: when predicate huge page, consider also pmd_devmap, or use pmd_large */ static inline int pmd_trans_huge(pmd_t pmd) { return (pmd_val(pmd) & (_PAGE_PSE|_PAGE_DEVMAP)) == _PAGE_PSE; @@ -286,29 +285,6 @@ static inline int has_transparent_hugepage(void) return boot_cpu_has(X86_FEATURE_PSE); } -#ifdef CONFIG_ARCH_HAS_PTE_DEVMAP -static inline int pmd_devmap(pmd_t pmd) -{ - return !!(pmd_val(pmd) & _PAGE_DEVMAP); -} - -#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD -static inline int pud_devmap(pud_t pud) -{ - return !!(pud_val(pud) & _PAGE_DEVMAP); -} -#else -static inline int pud_devmap(pud_t pud) -{ - return 0; -} -#endif - -static inline int pgd_devmap(pgd_t pgd) -{ - return 0; -} -#endif #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ static inline pte_t pte_set_flags(pte_t pte, pteval_t set) @@ -968,13 +944,6 @@ static inline int pte_present(pte_t a) return pte_flags(a) & (_PAGE_PRESENT | _PAGE_PROTNONE); } -#ifdef CONFIG_ARCH_HAS_PTE_DEVMAP -static inline int pte_devmap(pte_t a) -{ - return (pte_flags(a) & _PAGE_DEVMAP) == _PAGE_DEVMAP; -} -#endif - #define pte_accessible pte_accessible static inline bool pte_accessible(struct mm_struct *mm, pte_t a) { diff --git a/fs/dax.c b/fs/dax.c index a45793f..b83c668 100644 --- a/fs/dax.c +++ b/fs/dax.c @@ -1694,7 +1694,7 @@ static vm_fault_t dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp, * the PTE we need to set up. If so just return and the fault will be * retried. */ - if (pmd_trans_huge(*vmf->pmd) || pmd_devmap(*vmf->pmd)) { + if (pmd_trans_huge(*vmf->pmd)) { ret = VM_FAULT_NOPAGE; goto unlock_entry; } @@ -1815,8 +1815,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp, * the PMD we need to set up. If so just return and the fault will be * retried. 
 */
- if (!pmd_none(*vmf->pmd) && !pmd_trans_huge(*vmf->pmd) &&
- !pmd_devmap(*vmf->pmd)) {
+ if (!pmd_none(*vmf->pmd) && !pmd_trans_huge(*vmf->pmd)) {
 ret = 0;
 goto unlock_entry;
 }
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 56eaae9..33f9448 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -353,7 +353,7 @@ static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
 goto out;
 ret = false;
- if (!pmd_present(_pmd) || pmd_devmap(_pmd))
+ if (!pmd_present(_pmd) || vma_is_dax(vmf->vma))
 goto out;
 if (pmd_trans_huge(_pmd)) {
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index bf49efa..81b5b49 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -156,8 +156,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 #define split_huge_pmd(__vma, __pmd, __address) \
 do { \
 pmd_t *____pmd = (__pmd); \
- if (is_swap_pmd(*____pmd) || pmd_trans_huge(*____pmd) \
- || pmd_devmap(*____pmd)) \
+ if (is_swap_pmd(*____pmd) || pmd_trans_huge(*____pmd)) \
 __split_huge_pmd(__vma, __pmd, __address, \
 false, NULL); \
 } while (0)
@@ -172,8 +171,7 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
 #define split_huge_pud(__vma, __pud, __address) \
 do { \
 pud_t *____pud = (__pud); \
- if (pud_trans_huge(*____pud) \
- || pud_devmap(*____pud)) \
+ if (pud_trans_huge(*____pud)) \
 __split_huge_pud(__vma, __pud, __address); \
 } while (0)
@@ -196,7 +194,7 @@ static inline int is_swap_pmd(pmd_t pmd)
 static inline spinlock_t *pmd_trans_huge_lock(pmd_t *pmd,
 struct vm_area_struct *vma)
 {
- if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd))
+ if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd))
 return __pmd_trans_huge_lock(pmd, vma);
 else
 return NULL;
@@ -204,7 +202,7 @@ static inline spinlock_t *pmd_trans_huge_lock(pmd_t *pmd,
 static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
 struct vm_area_struct *vma)
 {
- if (pud_trans_huge(*pud) || pud_devmap(*pud))
+ if (pud_trans_huge(*pud))
 return __pud_trans_huge_lock(pud, vma);
 else
 return NULL;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index f10aa62..d299b42 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2620,13 +2620,6 @@ static inline pte_t pte_mkspecial(pte_t pte)
 }
 #endif
-#ifndef CONFIG_ARCH_HAS_PTE_DEVMAP
-static inline int pte_devmap(pte_t pte)
-{
- return 0;
-}
-#endif
-
 extern pte_t *__get_locked_pte(struct mm_struct *mm, unsigned long addr,
 spinlock_t **ptl);
 static inline pte_t *get_locked_pte(struct mm_struct *mm, unsigned long addr,
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index af7639c..ff7ca9d 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1363,21 +1363,6 @@ static inline int pud_write(pud_t pud)
 }
 #endif /* pud_write */
-#if !defined(CONFIG_ARCH_HAS_PTE_DEVMAP) || !defined(CONFIG_TRANSPARENT_HUGEPAGE)
-static inline int pmd_devmap(pmd_t pmd)
-{
- return 0;
-}
-static inline int pud_devmap(pud_t pud)
-{
- return 0;
-}
-static inline int pgd_devmap(pgd_t pgd)
-{
- return 0;
-}
-#endif
-
 #if !defined(CONFIG_TRANSPARENT_HUGEPAGE) || \
 !defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
 static inline int pud_trans_huge(pud_t pud)
@@ -1392,7 +1377,7 @@ static inline int pud_trans_unstable(pud_t *pud)
 defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
 pud_t pudval = READ_ONCE(*pud);
- if (pud_none(pudval) || pud_trans_huge(pudval) || pud_devmap(pudval))
+ if (pud_none(pudval) || pud_trans_huge(pudval))
 return 1;
 if (unlikely(pud_bad(pudval))) {
 pud_clear_bad(pud);
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 48e329e..edb9fcb 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -705,53 +705,6 @@ static void __init pmd_protnone_tests(struct pgtable_debug_args *args)
 static void __init pmd_protnone_tests(struct pgtable_debug_args *args) { }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
-#ifdef CONFIG_ARCH_HAS_PTE_DEVMAP
-static void __init pte_devmap_tests(struct pgtable_debug_args *args)
-{
- pte_t pte = pfn_pte(args->fixed_pte_pfn, args->page_prot);
-
- pr_debug("Validating PTE devmap\n");
- WARN_ON(!pte_devmap(pte_mkdevmap(pte)));
-}
-
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-static void __init pmd_devmap_tests(struct pgtable_debug_args *args)
-{
- pmd_t pmd;
-
- if (!has_transparent_hugepage())
- return;
-
- pr_debug("Validating PMD devmap\n");
- pmd = pfn_pmd(args->fixed_pmd_pfn, args->page_prot);
- WARN_ON(!pmd_devmap(pmd_mkdevmap(pmd)));
-}
-
-#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
-static void __init pud_devmap_tests(struct pgtable_debug_args *args)
-{
- pud_t pud;
-
- if (!has_transparent_pud_hugepage())
- return;
-
- pr_debug("Validating PUD devmap\n");
- pud = pfn_pud(args->fixed_pud_pfn, args->page_prot);
- WARN_ON(!pud_devmap(pud_mkdevmap(pud)));
-}
-#else /* !CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
-static void __init pud_devmap_tests(struct pgtable_debug_args *args) { }
-#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
-#else /* CONFIG_TRANSPARENT_HUGEPAGE */
-static void __init pmd_devmap_tests(struct pgtable_debug_args *args) { }
-static void __init pud_devmap_tests(struct pgtable_debug_args *args) { }
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
-#else
-static void __init pte_devmap_tests(struct pgtable_debug_args *args) { }
-static void __init pmd_devmap_tests(struct pgtable_debug_args *args) { }
-static void __init pud_devmap_tests(struct pgtable_debug_args *args) { }
-#endif /* CONFIG_ARCH_HAS_PTE_DEVMAP */
-
 static void __init pte_soft_dirty_tests(struct pgtable_debug_args *args)
 {
 pte_t pte = pfn_pte(args->fixed_pte_pfn, args->page_prot);
@@ -1352,10 +1305,6 @@ static int __init debug_vm_pgtable(void)
 pte_protnone_tests(&args);
 pmd_protnone_tests(&args);
- pte_devmap_tests(&args);
- pmd_devmap_tests(&args);
- pud_devmap_tests(&args);
-
 pte_soft_dirty_tests(&args);
 pmd_soft_dirty_tests(&args);
 pte_swap_soft_dirty_tests(&args);
diff --git a/mm/gup.c b/mm/gup.c
index 6a3141d..8c3c7d3 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -600,8 +600,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 page = vm_normal_page(vma, address, pte);
 /*
- * We only care about anon pages in can_follow_write_pte() and don't
- * have to worry about pte_devmap() because they are never anon.
+ * We only care about anon pages in can_follow_write_pte().
 */
 if ((flags & FOLL_WRITE) &&
 !can_follow_write_pte(pte, page, vma, flags)) {
@@ -609,18 +608,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 goto out;
 }
- if (!page && pte_devmap(pte) && (flags & (FOLL_GET | FOLL_PIN))) {
- /*
- * Only return device mapping pages in the FOLL_GET or FOLL_PIN
- * case since they are only valid while holding the pgmap
- * reference.
- */
- *pgmap = get_dev_pagemap(pte_pfn(pte), *pgmap);
- if (*pgmap)
- page = pte_page(pte);
- else
- goto no_page;
- } else if (unlikely(!page)) {
+ if (unlikely(!page)) {
 if (flags & FOLL_DUMP) {
 /* Avoid special (like zero) pages in core dumps */
 page = ERR_PTR(-EFAULT);
@@ -701,13 +689,6 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 return no_page_table(vma, flags);
 if (!pmd_present(pmdval))
 return no_page_table(vma, flags);
- if (pmd_devmap(pmdval)) {
- ptl = pmd_lock(mm, pmd);
- page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
- spin_unlock(ptl);
- if (page)
- return page;
- }
 if (likely(!pmd_trans_huge(pmdval)))
 return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
@@ -742,20 +723,10 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 struct follow_page_context *ctx)
 {
 pud_t *pud;
- spinlock_t *ptl;
- struct page *page;
- struct mm_struct *mm = vma->vm_mm;
 pud = pud_offset(p4dp, address);
 if (pud_none(*pud))
 return no_page_table(vma, flags);
- if (pud_devmap(*pud)) {
- ptl = pud_lock(mm, pud);
- page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap);
- spin_unlock(ptl);
- if (page)
- return page;
- }
 if (unlikely(pud_bad(*pud)))
 return no_page_table(vma, flags);
@@ -2554,7 +2525,7 @@ static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
 struct page **pages, int *nr)
 {
 struct dev_pagemap *pgmap = NULL;
- int nr_start = *nr, ret = 0;
+ int ret = 0;
 pte_t *ptep, *ptem;
 ptem = ptep = pte_offset_map(&pmd, addr);
@@ -2578,16 +2549,7 @@ static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
 if (!pte_access_permitted(pte, flags & FOLL_WRITE))
 goto pte_unmap;
- if (pte_devmap(pte)) {
- if (unlikely(flags & FOLL_LONGTERM))
- goto pte_unmap;
-
- pgmap = get_dev_pagemap(pte_pfn(pte), pgmap);
- if (unlikely(!pgmap)) {
- undo_dev_pagemap(nr, nr_start, flags, pages);
- goto pte_unmap;
- }
- } else if (pte_special(pte))
+ if (pte_special(pte))
 goto pte_unmap;
 VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
@@ -2663,90 +2625,6 @@ static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
 }
 #endif /* CONFIG_ARCH_HAS_PTE_SPECIAL */
-#if defined(CONFIG_ARCH_HAS_PTE_DEVMAP) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
-static int __gup_device_huge(unsigned long pfn, unsigned long addr,
- unsigned long end, unsigned int flags,
- struct page **pages, int *nr)
-{
- int nr_start = *nr;
- struct dev_pagemap *pgmap = NULL;
-
- do {
- struct page *page = pfn_to_page(pfn);
-
- pgmap = get_dev_pagemap(pfn, pgmap);
- if (unlikely(!pgmap)) {
- undo_dev_pagemap(nr, nr_start, flags, pages);
- break;
- }
-
- SetPageReferenced(page);
- pages[*nr] = page;
- if (unlikely(try_grab_page(page, flags))) {
- undo_dev_pagemap(nr, nr_start, flags, pages);
- break;
- }
- (*nr)++;
- pfn++;
- } while (addr += PAGE_SIZE, addr != end);
-
- put_dev_pagemap(pgmap);
- return addr == end;
-}
-
-static int __gup_device_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
- unsigned long end, unsigned int flags,
- struct page **pages, int *nr)
-{
- unsigned long fault_pfn;
- int nr_start = *nr;
-
- fault_pfn = pmd_pfn(orig) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
- if (!__gup_device_huge(fault_pfn, addr, end, flags, pages, nr))
- return 0;
-
- if (unlikely(pmd_val(orig) != pmd_val(*pmdp))) {
- undo_dev_pagemap(nr, nr_start, flags, pages);
- return 0;
- }
- return 1;
-}
-
-static int __gup_device_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
- unsigned long end, unsigned int flags,
- struct page **pages, int *nr)
-{
- unsigned long fault_pfn;
- int nr_start = *nr;
-
- fault_pfn = pud_pfn(orig) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
- if (!__gup_device_huge(fault_pfn, addr, end, flags, pages, nr))
- return 0;
-
- if (unlikely(pud_val(orig) != pud_val(*pudp))) {
- undo_dev_pagemap(nr, nr_start, flags, pages);
- return 0;
- }
- return 1;
-}
-#else
-static int __gup_device_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
- unsigned long end, unsigned int flags,
- struct page **pages, int *nr)
-{
- BUILD_BUG();
- return 0;
-}
-
-static int __gup_device_huge_pud(pud_t pud, pud_t *pudp, unsigned long addr,
- unsigned long end, unsigned int flags,
- struct page **pages, int *nr)
-{
- BUILD_BUG();
- return 0;
-}
-#endif
-
 static int record_subpages(struct page *page, unsigned long addr,
 unsigned long end, struct page **pages)
 {
@@ -2852,13 +2730,6 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 if (!pmd_access_permitted(orig, flags & FOLL_WRITE))
 return 0;
- if (pmd_devmap(orig)) {
- if (unlikely(flags & FOLL_LONGTERM))
- return 0;
- return __gup_device_huge_pmd(orig, pmdp, addr, end, flags,
- pages, nr);
- }
-
 page = nth_page(pmd_page(orig), (addr & ~PMD_MASK) >> PAGE_SHIFT);
 refs = record_subpages(page, addr, end, pages + *nr);
@@ -2896,13 +2767,6 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 if (!pud_access_permitted(orig, flags & FOLL_WRITE))
 return 0;
- if (pud_devmap(orig)) {
- if (unlikely(flags & FOLL_LONGTERM))
- return 0;
- return __gup_device_huge_pud(orig, pudp, addr, end, flags,
- pages, nr);
- }
-
 page = nth_page(pud_page(orig), (addr & ~PUD_MASK) >> PAGE_SHIFT);
 refs = record_subpages(page, addr, end, pages + *nr);
@@ -2941,8 +2805,6 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
 if (!pgd_access_permitted(orig, flags & FOLL_WRITE))
 return 0;
- BUILD_BUG_ON(pgd_devmap(orig));
-
 page = nth_page(pgd_page(orig), (addr & ~PGDIR_MASK) >> PAGE_SHIFT);
 refs = record_subpages(page, addr, end, pages + *nr);
@@ -2984,8 +2846,7 @@ static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned lo
 if (!pmd_present(pmd))
 return 0;
- if (unlikely(pmd_trans_huge(pmd) || pmd_huge(pmd) ||
- pmd_devmap(pmd))) {
+ if (unlikely(pmd_trans_huge(pmd) || pmd_huge(pmd))) {
 /* See gup_pte_range() */
 if (pmd_protnone(pmd))
 return 0;
@@ -3022,7 +2883,7 @@ static int gup_pud_range(p4d_t *p4dp, p4d_t p4d, unsigned long addr, unsigned lo
 next = pud_addr_end(addr, end);
 if (unlikely(!pud_present(pud)))
 return 0;
- if (unlikely(pud_huge(pud) || pud_devmap(pud))) {
+ if (unlikely(pud_huge(pud))) {
 if (!gup_huge_pud(pud, pudp, addr, next, flags,
 pages, nr))
 return 0;
diff --git a/mm/hmm.c b/mm/hmm.c
index a665a3c..5ad89f9 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -298,7 +298,6 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 * fall through and treat it like a normal page.
 */
 if (!vm_normal_page(walk->vma, addr, pte) &&
- !pte_devmap(pte) &&
 !is_zero_pfn(pte_pfn(pte))) {
 if (hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0)) {
 pte_unmap(ptep);
@@ -351,7 +350,7 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 return hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
 }
- if (pmd_devmap(pmd) || pmd_trans_huge(pmd)) {
+ if (pmd_trans_huge(pmd)) {
 /*
 * No need to take pmd_lock here, even if some other thread
 * is splitting the huge pmd we will get that event through
@@ -362,7 +361,7 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 * values.
 */
 pmd = pmdp_get_lockless(pmdp);
- if (!pmd_devmap(pmd) && !pmd_trans_huge(pmd))
+ if (!pmd_trans_huge(pmd))
 goto again;
 return hmm_vma_handle_pmd(walk, addr, end, hmm_pfns, pmd);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c657886..265506a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1090,46 +1090,6 @@ static void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
 update_mmu_cache_pmd(vma, addr, pmd);
 }
-struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
- pmd_t *pmd, int flags, struct dev_pagemap **pgmap)
-{
- unsigned long pfn = pmd_pfn(*pmd);
- struct mm_struct *mm = vma->vm_mm;
- struct page *page;
- int ret;
-
- assert_spin_locked(pmd_lockptr(mm, pmd));
-
- if (flags & FOLL_WRITE && !pmd_write(*pmd))
- return NULL;
-
- if (pmd_present(*pmd) && pmd_devmap(*pmd))
- /* pass */;
- else
- return NULL;
-
- if (flags & FOLL_TOUCH)
- touch_pmd(vma, addr, pmd, flags & FOLL_WRITE);
-
- /*
- * device mapped pages can only be returned if the
- * caller will manage the page reference count.
- */
- if (!(flags & (FOLL_GET | FOLL_PIN)))
- return ERR_PTR(-EEXIST);
-
- pfn += (addr & ~PMD_MASK) >> PAGE_SHIFT;
- *pgmap = get_dev_pagemap(pfn, *pgmap);
- if (!*pgmap)
- return ERR_PTR(-EFAULT);
- page = pfn_to_page(pfn);
- ret = try_grab_page(page, flags);
- if (ret)
- page = ERR_PTR(ret);
-
- return page;
-}
-
 int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
 struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
@@ -1245,49 +1205,6 @@ static void touch_pud(struct vm_area_struct *vma, unsigned long addr,
 update_mmu_cache_pud(vma, addr, pud);
 }
-struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
- pud_t *pud, int flags, struct dev_pagemap **pgmap)
-{
- unsigned long pfn = pud_pfn(*pud);
- struct mm_struct *mm = vma->vm_mm;
- struct page *page;
- int ret;
-
- assert_spin_locked(pud_lockptr(mm, pud));
-
- if (flags & FOLL_WRITE && !pud_write(*pud))
- return NULL;
-
- if (pud_present(*pud) && pud_devmap(*pud))
- /* pass */;
- else
- return NULL;
-
- if (flags & FOLL_TOUCH)
- touch_pud(vma, addr, pud, flags & FOLL_WRITE);
-
- /*
- * device mapped pages can only be returned if the
- * caller will manage the page reference count.
- *
- * At least one of FOLL_GET | FOLL_PIN must be set, so assert that here:
- */
- if (!(flags & (FOLL_GET | FOLL_PIN)))
- return ERR_PTR(-EEXIST);
-
- pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
- *pgmap = get_dev_pagemap(pfn, *pgmap);
- if (!*pgmap)
- return ERR_PTR(-EFAULT);
- page = pfn_to_page(pfn);
-
- ret = try_grab_page(page, flags);
- if (ret)
- page = ERR_PTR(ret);
-
- return page;
-}
-
 int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
 struct vm_area_struct *vma)
@@ -1302,7 +1219,7 @@ int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 ret = -EAGAIN;
 pud = *src_pud;
- if (unlikely(!pud_trans_huge(pud) && !pud_devmap(pud)))
+ if (unlikely(!pud_trans_huge(pud)))
 goto out_unlock;
 /*
@@ -2013,8 +1930,7 @@ spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma)
 {
 spinlock_t *ptl;
 ptl = pmd_lock(vma->vm_mm, pmd);
- if (likely(is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) ||
- pmd_devmap(*pmd)))
+ if (likely(is_swap_pmd(*pmd) || pmd_trans_huge(*pmd)))
 return ptl;
 spin_unlock(ptl);
 return NULL;
@@ -2031,7 +1947,7 @@ spinlock_t *__pud_trans_huge_lock(pud_t *pud, struct vm_area_struct *vma)
 spinlock_t *ptl;
 ptl = pud_lock(vma->vm_mm, pud);
- if (likely(pud_trans_huge(*pud) || pud_devmap(*pud)))
+ if (likely(pud_trans_huge(*pud)))
 return ptl;
 spin_unlock(ptl);
 return NULL;
@@ -2065,7 +1981,7 @@ static void __split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
 VM_BUG_ON(haddr & ~HPAGE_PUD_MASK);
 VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
 VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PUD_SIZE, vma);
- VM_BUG_ON(!pud_trans_huge(*pud) && !pud_devmap(*pud));
+ VM_BUG_ON(!pud_trans_huge(*pud));
 count_vm_event(THP_SPLIT_PUD);
@@ -2083,7 +1999,7 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
 (address & HPAGE_PUD_MASK) + HPAGE_PUD_SIZE);
 mmu_notifier_invalidate_range_start(&range);
 ptl = pud_lock(vma->vm_mm, pud);
- if (unlikely(!pud_trans_huge(*pud) && !pud_devmap(*pud)))
+ if (unlikely(!pud_trans_huge(*pud)))
 goto out;
 __split_huge_pud_locked(vma, pud, range.start);
@@ -2150,8 +2066,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 VM_BUG_ON(haddr & ~HPAGE_PMD_MASK);
 VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
 VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PMD_SIZE, vma);
- VM_BUG_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd)
- && !pmd_devmap(*pmd));
+ VM_BUG_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd));
 count_vm_event(THP_SPLIT_PMD);
@@ -2354,8 +2269,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 VM_BUG_ON(freeze && !folio);
 VM_WARN_ON_ONCE(folio && !folio_test_locked(folio));
- if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
- is_pmd_migration_entry(*pmd)) {
+ if (pmd_trans_huge(*pmd) || is_pmd_migration_entry(*pmd)) {
 /*
 * It's safe to call pmd_page when folio is set because it's
 * guaranteed that pmd is present.
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index b10db15..7254ad4 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -955,8 +955,6 @@ static int find_pmd_or_thp_or_none(struct mm_struct *mm,
 return SCAN_PMD_NULL;
 if (pmd_trans_huge(pmde))
 return SCAN_PMD_MAPPED;
- if (WARN_ON_ONCE(pmd_devmap(pmde)))
- return SCAN_PMD_NULL;
 if (pmd_bad(pmde))
 return SCAN_PMD_NULL;
 return SCAN_SUCCEED;
diff --git a/mm/mapping_dirty_helpers.c b/mm/mapping_dirty_helpers.c
index 2f8829b..208b428 100644
--- a/mm/mapping_dirty_helpers.c
+++ b/mm/mapping_dirty_helpers.c
@@ -129,7 +129,7 @@ static int wp_clean_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long end,
 pmd_t pmdval = pmdp_get_lockless(pmd);
 /* Do not split a huge pmd, present or migrated */
- if (pmd_trans_huge(pmdval) || pmd_devmap(pmdval)) {
+ if (pmd_trans_huge(pmdval)) {
 WARN_ON(pmd_write(pmdval) || pmd_dirty(pmdval));
 walk->action = ACTION_CONTINUE;
 }
@@ -152,7 +152,7 @@ static int wp_clean_pud_entry(pud_t *pud, unsigned long addr, unsigned long end,
 pud_t pudval = READ_ONCE(*pud);
 /* Do not split a huge pud */
- if (pud_trans_huge(pudval) || pud_devmap(pudval)) {
+ if (pud_trans_huge(pudval)) {
 WARN_ON(pud_write(pudval) || pud_dirty(pudval));
 walk->action = ACTION_CONTINUE;
 }
diff --git a/mm/memory.c b/mm/memory.c
index 418b630..6b0a2d1 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -592,16 +592,6 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 return NULL;
 if (is_zero_pfn(pfn))
 return NULL;
- if (pte_devmap(pte))
- /*
- * NOTE: New users of ZONE_DEVICE will not set pte_devmap()
- * and will have refcounts incremented on their struct pages
- * when they are inserted into PTEs, thus they are safe to
- * return here. Legacy ZONE_DEVICE pages that set pte_devmap()
- * do not have refcounts. Example of legacy ZONE_DEVICE is
- * MEMORY_DEVICE_FS_DAX type in pmem or virtio_fs drivers.
- */
- return NULL;
 print_bad_pte(vma, addr, pte, NULL);
 return NULL;
@@ -677,8 +667,6 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
 }
 }
- if (pmd_devmap(pmd))
- return NULL;
 if (is_huge_zero_pmd(pmd))
 return NULL;
 if (unlikely(pfn > highest_memmap_pfn))
@@ -1150,8 +1138,7 @@ copy_pmd_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 src_pmd = pmd_offset(src_pud, addr);
 do {
 next = pmd_addr_end(addr, end);
- if (is_swap_pmd(*src_pmd) || pmd_trans_huge(*src_pmd)
- || pmd_devmap(*src_pmd)) {
+ if (is_swap_pmd(*src_pmd) || pmd_trans_huge(*src_pmd)) {
 int err;
 VM_BUG_ON_VMA(next-addr != HPAGE_PMD_SIZE, src_vma);
 err = copy_huge_pmd(dst_mm, src_mm, dst_pmd, src_pmd,
@@ -1187,7 +1174,7 @@ copy_pud_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 src_pud = pud_offset(src_p4d, addr);
 do {
 next = pud_addr_end(addr, end);
- if (pud_trans_huge(*src_pud) || pud_devmap(*src_pud)) {
+ if (pud_trans_huge(*src_pud)) {
 int err;
 VM_BUG_ON_VMA(next-addr != HPAGE_PUD_SIZE, src_vma);
@@ -1547,7 +1534,7 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb,
 pmd = pmd_offset(pud, addr);
 do {
 next = pmd_addr_end(addr, end);
- if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd)) {
+ if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd)) {
 if (next - addr != HPAGE_PMD_SIZE)
 __split_huge_pmd(vma, pmd, addr, false, NULL);
 else if (zap_huge_pmd(tlb, vma, pmd, addr)) {
@@ -1589,7 +1576,7 @@ static inline unsigned long zap_pud_range(struct mmu_gather *tlb,
 pud = pud_offset(p4d, addr);
 do {
 next = pud_addr_end(addr, end);
- if (pud_trans_huge(*pud) || pud_devmap(*pud)) {
+ if (pud_trans_huge(*pud)) {
 if (next - addr != HPAGE_PUD_SIZE) {
 mmap_assert_locked(tlb->mm);
 split_huge_pud(vma, pud, addr);
@@ -5129,7 +5116,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 pud_t orig_pud = *vmf.pud;
 barrier();
- if (pud_trans_huge(orig_pud) || pud_devmap(orig_pud)) {
+ if (pud_trans_huge(orig_pud)) {
 /*
 * TODO once we support anonymous PUDs: NUMA case and
@@ -5169,7 +5156,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 pmd_migration_entry_wait(mm, vmf.pmd);
 return 0;
 }
- if (pmd_trans_huge(vmf.orig_pmd) || pmd_devmap(vmf.orig_pmd)) {
+ if (pmd_trans_huge(vmf.orig_pmd)) {
 if (pmd_protnone(vmf.orig_pmd) && vma_is_accessible(vma))
 return do_huge_pmd_numa_page(&vmf);
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 1e1c82f..a3659c0 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -590,7 +590,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
 pmdp = pmd_alloc(mm, pudp, addr);
 if (!pmdp)
 goto abort;
- if (pmd_trans_huge(*pmdp) || pmd_devmap(*pmdp))
+ if (pmd_trans_huge(*pmdp))
 goto abort;
 if (pte_alloc(mm, pmdp))
 goto abort;
diff --git a/mm/mprotect.c b/mm/mprotect.c
index b94fbb4..a83e9f4 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -387,7 +387,7 @@ static inline long change_pmd_range(struct mmu_gather *tlb,
 }
 _pmd = pmdp_get_lockless(pmd);
- if (is_swap_pmd(_pmd) || pmd_trans_huge(_pmd) || pmd_devmap(_pmd)) {
+ if (is_swap_pmd(_pmd) || pmd_trans_huge(_pmd)) {
 if ((next - addr != HPAGE_PMD_SIZE) ||
 pgtable_split_needed(vma, cp_flags)) {
 __split_huge_pmd(vma, pmd, addr, false, NULL);
diff --git a/mm/mremap.c b/mm/mremap.c
index 382e81c..78ed214 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -527,7 +527,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 new_pud = alloc_new_pud(vma->vm_mm, vma, new_addr);
 if (!new_pud)
 break;
- if (pud_trans_huge(*old_pud) || pud_devmap(*old_pud)) {
+ if (pud_trans_huge(*old_pud)) {
 if (extent == HPAGE_PUD_SIZE) {
 move_pgt_entry(HPAGE_PUD, vma, old_addr, new_addr,
 old_pud, new_pud, need_rmap_locks);
@@ -549,8 +549,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 if (!new_pmd)
 break;
 again:
- if (is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd) ||
- pmd_devmap(*old_pmd)) {
+ if (is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd)) {
 if (extent == HPAGE_PMD_SIZE &&
 move_pgt_entry(HPAGE_PMD, vma, old_addr, new_addr,
 old_pmd, new_pmd, need_rmap_locks))
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index e0b368e..c519dbf 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -235,8 +235,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 */
 pmde = pmdp_get_lockless(pvmw->pmd);
- if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde) ||
- (pmd_present(pmde) && pmd_devmap(pmde))) {
+ if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde)) {
 pvmw->ptl = pmd_lock(mm, pvmw->pmd);
 pmde = *pvmw->pmd;
 if (!pmd_present(pmde)) {
@@ -251,7 +250,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 return not_found(pvmw);
 return true;
 }
- if (likely(pmd_trans_huge(pmde) || pmd_devmap(pmde))) {
+ if (likely(pmd_trans_huge(pmde))) {
 if (pvmw->flags & PVMW_MIGRATION)
 return not_found(pvmw);
 if (!check_pmd(pmd_pfn(pmde), pvmw))
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 4fcd959..ab24a0c 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -139,8 +139,7 @@ pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address,
 {
 pmd_t pmd;
 VM_BUG_ON(address & ~HPAGE_PMD_MASK);
- VM_BUG_ON(pmd_present(*pmdp) && !pmd_trans_huge(*pmdp) &&
- !pmd_devmap(*pmdp));
+ VM_BUG_ON(pmd_present(*pmdp) && !pmd_trans_huge(*pmdp));
 pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
 flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
 return pmd;
@@ -153,7 +152,7 @@ pud_t pudp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address,
 pud_t pud;
 VM_BUG_ON(address & ~HPAGE_PUD_MASK);
- VM_BUG_ON(!pud_trans_huge(*pudp) && !pud_devmap(*pudp));
+ VM_BUG_ON(!pud_trans_huge(*pudp));
 pud = pudp_huge_get_and_clear(vma->vm_mm, address, pudp);
 flush_pud_tlb_range(vma, address, address + HPAGE_PUD_SIZE);
 return pud;
@@ -291,7 +290,7 @@ pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
 *pmdvalp = pmdval;
 if (unlikely(pmd_none(pmdval) || is_pmd_migration_entry(pmdval)))
 goto nomap;
- if (unlikely(pmd_trans_huge(pmdval) || pmd_devmap(pmdval)))
+ if (unlikely(pmd_trans_huge(pmdval)))
 goto nomap;
 if (unlikely(pmd_bad(pmdval))) {
 pmd_clear_bad(pmd);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6f13394..09064c2 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3940,7 +3940,7 @@ static unsigned long get_pte_pfn(pte_t pte, struct vm_area_struct *vma, unsigned
 if (!pte_present(pte) || is_zero_pfn(pfn))
 return -1;
- if (WARN_ON_ONCE(pte_devmap(pte) || pte_special(pte)))
+ if (WARN_ON_ONCE(pte_special(pte)))
 return -1;
 if (WARN_ON_ONCE(!pfn_valid(pfn)))
@@ -3959,9 +3959,6 @@ static unsigned long get_pmd_pfn(pmd_t pmd, struct vm_area_struct *vma, unsigned
 if (!pmd_present(pmd) || is_huge_zero_pmd(pmd))
 return -1;
- if (WARN_ON_ONCE(pmd_devmap(pmd)))
- return -1;
-
 if (WARN_ON_ONCE(!pfn_valid(pfn)))
 return -1;