From patchwork Fri Nov 1 15:03:55 2024
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13859515
From: Zi Yan <ziy@nvidia.com>
To: linux-mm@kvack.org, "Kirill A. Shutemov", "Matthew Wilcox (Oracle)"
Cc: Ryan Roberts, Hugh Dickins, David Hildenbrand, Yang Shi, Miaohe Lin,
 Kefeng Wang, Yu Zhao, John Hubbard, linux-kernel@vger.kernel.org, Zi Yan
Subject: [PATCH v2 4/6] mm/huge_memory: remove the old, unused __split_huge_page()
Date: Fri, 1 Nov 2024 11:03:55 -0400
Message-ID: <20241101150357.1752726-5-ziy@nvidia.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20241101150357.1752726-1-ziy@nvidia.com>
References: <20241101150357.1752726-1-ziy@nvidia.com>
MIME-Version: 1.0

Now that split_huge_page_to_list_to_order() uses the new backend split code
in __folio_split_without_mapping(), the old __split_huge_page() and
__split_huge_page_tail() can be removed.

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 mm/huge_memory.c | 207 -------------------------------------------------
 1 file changed, 207 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 4f227d246ac5..f5094b677bb8 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3154,213 +3154,6 @@ static void lru_add_page_tail(struct folio *folio, struct page *tail,
 	}
 }
 
-static void __split_huge_page_tail(struct folio *folio, int tail,
-		struct lruvec *lruvec, struct list_head *list,
-		unsigned int new_order)
-{
-	struct page *head = &folio->page;
-	struct page *page_tail = head + tail;
-	/*
-	 * Careful: new_folio is not a "real" folio before we cleared PageTail.
-	 * Don't pass it around before clear_compound_head().
-	 */
-	struct folio *new_folio = (struct folio *)page_tail;
-
-	VM_BUG_ON_PAGE(atomic_read(&page_tail->_mapcount) != -1, page_tail);
-
-	/*
-	 * Clone page flags before unfreezing refcount.
-	 *
-	 * After successful get_page_unless_zero() might follow flags change,
-	 * for example lock_page() which set PG_waiters.
-	 *
-	 * Note that for mapped sub-pages of an anonymous THP,
-	 * PG_anon_exclusive has been cleared in unmap_folio() and is stored in
-	 * the migration entry instead from where remap_page() will restore it.
-	 * We can still have PG_anon_exclusive set on effectively unmapped and
-	 * unreferenced sub-pages of an anonymous THP: we can simply drop
-	 * PG_anon_exclusive (-> PG_mappedtodisk) for these here.
-	 */
-	page_tail->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
-	page_tail->flags |= (head->flags &
-			((1L << PG_referenced) |
-			 (1L << PG_swapbacked) |
-			 (1L << PG_swapcache) |
-			 (1L << PG_mlocked) |
-			 (1L << PG_uptodate) |
-			 (1L << PG_active) |
-			 (1L << PG_workingset) |
-			 (1L << PG_locked) |
-			 (1L << PG_unevictable) |
-#ifdef CONFIG_ARCH_USES_PG_ARCH_2
-			 (1L << PG_arch_2) |
-#endif
-#ifdef CONFIG_ARCH_USES_PG_ARCH_3
-			 (1L << PG_arch_3) |
-#endif
-			 (1L << PG_dirty) |
-			 LRU_GEN_MASK | LRU_REFS_MASK));
-
-	/* ->mapping in first and second tail page is replaced by other uses */
-	VM_BUG_ON_PAGE(tail > 2 && page_tail->mapping != TAIL_MAPPING,
-			page_tail);
-	new_folio->mapping = folio->mapping;
-	new_folio->index = folio->index + tail;
-
-	/*
-	 * page->private should not be set in tail pages. Fix up and warn once
-	 * if private is unexpectedly set.
-	 */
-	if (unlikely(page_tail->private)) {
-		VM_WARN_ON_ONCE_PAGE(true, page_tail);
-		page_tail->private = 0;
-	}
-	if (folio_test_swapcache(folio))
-		new_folio->swap.val = folio->swap.val + tail;
-
-	/* Page flags must be visible before we make the page non-compound. */
-	smp_wmb();
-
-	/*
-	 * Clear PageTail before unfreezing page refcount.
-	 *
-	 * After successful get_page_unless_zero() might follow put_page()
-	 * which needs correct compound_head().
-	 */
-	clear_compound_head(page_tail);
-	if (new_order) {
-		prep_compound_page(page_tail, new_order);
-		folio_set_large_rmappable(new_folio);
-	}
-
-	/* Finally unfreeze refcount. Additional reference from page cache. */
-	page_ref_unfreeze(page_tail,
-		1 + ((!folio_test_anon(folio) || folio_test_swapcache(folio)) ?
-			     folio_nr_pages(new_folio) : 0));
-
-	if (folio_test_young(folio))
-		folio_set_young(new_folio);
-	if (folio_test_idle(folio))
-		folio_set_idle(new_folio);
-
-	folio_xchg_last_cpupid(new_folio, folio_last_cpupid(folio));
-
-	/*
-	 * always add to the tail because some iterators expect new
-	 * pages to show after the currently processed elements - e.g.
-	 * migrate_pages
-	 */
-	lru_add_page_tail(folio, page_tail, lruvec, list);
-}
-
-static void __split_huge_page(struct page *page, struct list_head *list,
-		pgoff_t end, unsigned int new_order)
-{
-	struct folio *folio = page_folio(page);
-	struct page *head = &folio->page;
-	struct lruvec *lruvec;
-	struct address_space *swap_cache = NULL;
-	unsigned long offset = 0;
-	int i, nr_dropped = 0;
-	unsigned int new_nr = 1 << new_order;
-	int order = folio_order(folio);
-	unsigned int nr = 1 << order;
-
-	/* complete memcg works before add pages to LRU */
-	split_page_memcg(head, order, new_order);
-
-	if (folio_test_anon(folio) && folio_test_swapcache(folio)) {
-		offset = swap_cache_index(folio->swap);
-		swap_cache = swap_address_space(folio->swap);
-		xa_lock(&swap_cache->i_pages);
-	}
-
-	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
-	lruvec = folio_lruvec_lock(folio);
-
-	ClearPageHasHWPoisoned(head);
-
-	for (i = nr - new_nr; i >= new_nr; i -= new_nr) {
-		struct folio *tail;
-		__split_huge_page_tail(folio, i, lruvec, list, new_order);
-		tail = page_folio(head + i);
-		/* Some pages can be beyond EOF: drop them from page cache */
-		if (tail->index >= end) {
-			if (shmem_mapping(folio->mapping))
-				nr_dropped++;
-			else if (folio_test_clear_dirty(tail))
-				folio_account_cleaned(tail,
-					inode_to_wb(folio->mapping->host));
-			__filemap_remove_folio(tail, NULL);
-			folio_put(tail);
-		} else if (!folio_test_anon(folio)) {
-			__xa_store(&folio->mapping->i_pages, tail->index,
-					tail, 0);
-		} else if (swap_cache) {
-			__xa_store(&swap_cache->i_pages, offset + i,
-					tail, 0);
-		}
-	}
-
-	if (!new_order)
-		ClearPageCompound(head);
-	else {
-		struct folio *new_folio = (struct folio *)head;
-
-		folio_set_order(new_folio, new_order);
-	}
-	unlock_page_lruvec(lruvec);
-	/* Caller disabled irqs, so they are still disabled here */
-
-	split_page_owner(head, order, new_order);
-	pgalloc_tag_split(folio, order, new_order);
-
-	/* See comment in __split_huge_page_tail() */
-	if (folio_test_anon(folio)) {
-		/* Additional pin to swap cache */
-		if (folio_test_swapcache(folio)) {
-			folio_ref_add(folio, 1 + new_nr);
-			xa_unlock(&swap_cache->i_pages);
-		} else {
-			folio_ref_inc(folio);
-		}
-	} else {
-		/* Additional pin to page cache */
-		folio_ref_add(folio, 1 + new_nr);
-		xa_unlock(&folio->mapping->i_pages);
-	}
-	local_irq_enable();
-
-	if (nr_dropped)
-		shmem_uncharge(folio->mapping->host, nr_dropped);
-	remap_page(folio, nr, PageAnon(head) ? RMP_USE_SHARED_ZEROPAGE : 0);
-
-	/*
-	 * set page to its compound_head when split to non order-0 pages, so
-	 * we can skip unlocking it below, since PG_locked is transferred to
-	 * the compound_head of the page and the caller will unlock it.
-	 */
-	if (new_order)
-		page = compound_head(page);
-
-	for (i = 0; i < nr; i += new_nr) {
-		struct page *subpage = head + i;
-		struct folio *new_folio = page_folio(subpage);
-		if (subpage == page)
-			continue;
-		folio_unlock(new_folio);
-
-		/*
-		 * Subpages may be freed if there wasn't any mapping
-		 * like if add_to_swap() is running on a lru page that
-		 * had its mapping zapped. And freeing these pages
-		 * requires taking the lru_lock so we do the put_page
-		 * of the tail pages after the split is complete.
-		 */
-		free_page_and_swap_cache(subpage);
-	}
-}
-
 /* Racy check whether the huge page can be split */
 bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
 {