From patchwork Mon Oct 28 18:09:30 2024
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13853819
From: Zi Yan <ziy@nvidia.com>
To: linux-mm@kvack.org, "Matthew Wilcox (Oracle)"
Cc: Ryan Roberts, Hugh Dickins, "Kirill A. Shutemov", David Hildenbrand,
 Yang Shi, Miaohe Lin, Kefeng Wang, Yu Zhao, John Hubbard,
 linux-kernel@vger.kernel.org, Zi Yan
Subject: [PATCH v1 1/3] mm/huge_memory: buddy allocator like folio_split()
Date: Mon, 28 Oct 2024 14:09:30 -0400
Message-ID: <20241028180932.1319265-2-ziy@nvidia.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20241028180932.1319265-1-ziy@nvidia.com>
References: <20241028180932.1319265-1-ziy@nvidia.com>

folio_split() splits a large folio in the same way as the buddy allocator
splits a large free page for allocation. The purpose is to minimize the
number of folios after the split. For example, if a user wants to free the
3rd subpage in an order-9 folio, folio_split() will split the order-9 folio
as:
O-0, O-0, O-0, O-0, O-2, O-3, O-4, O-5, O-6, O-7, O-8 if it is anon,
O-1, O-0, O-0, O-2, O-3, O-4, O-5, O-6, O-7, O-8 if it is pagecache,
since anon folios do not support order-1 yet. This generates fewer folios
than the existing page split approach, which splits the order-9 folio into
512 order-0 folios.

To minimize code duplication, __split_huge_page() and
__split_huge_page_tail() are replaced by __folio_split_without_mapping() and
__split_folio_to_order() respectively.

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
A small userspace sketch reproducing these order sequences follows the diff.

 mm/huge_memory.c | 604 +++++++++++++++++++++++++++++------------------
 1 file changed, 372 insertions(+), 232 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 832ca761b4c3..0224925e4c3c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3135,7 +3135,6 @@ static void remap_page(struct folio *folio, unsigned long nr, int flags)
 static void lru_add_page_tail(struct folio *folio, struct page *tail,
 		struct lruvec *lruvec, struct list_head *list)
 {
-	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
 	VM_BUG_ON_FOLIO(PageLRU(tail), folio);
 	lockdep_assert_held(&lruvec->lru_lock);
 
@@ -3155,202 +3154,325 @@ static void lru_add_page_tail(struct folio *folio, struct page *tail,
 	}
 }
 
-static void __split_huge_page_tail(struct folio *folio, int tail,
-		struct lruvec *lruvec, struct list_head *list,
-		unsigned int new_order)
+/* Racy check whether the huge page can be split */
+bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
 {
-	struct page *head = &folio->page;
-	struct page *page_tail = head + tail;
-	/*
-	 * Careful: new_folio is not a "real" folio before we cleared PageTail.
-	 * Don't pass it around before clear_compound_head().
-	 */
-	struct folio *new_folio = (struct folio *)page_tail;
+	int extra_pins;
 
-	VM_BUG_ON_PAGE(atomic_read(&page_tail->_mapcount) != -1, page_tail);
+	/* Additional pins from page cache */
+	if (folio_test_anon(folio))
+		extra_pins = folio_test_swapcache(folio) ?
+				folio_nr_pages(folio) : 0;
+	else
+		extra_pins = folio_nr_pages(folio);
+	if (pextra_pins)
+		*pextra_pins = extra_pins;
+	return folio_mapcount(folio) == folio_ref_count(folio) - extra_pins -
+		caller_pins;
+}
 
-	/*
-	 * Clone page flags before unfreezing refcount.
-	 *
-	 * After successful get_page_unless_zero() might follow flags change,
-	 * for example lock_page() which set PG_waiters.
-	 *
-	 * Note that for mapped sub-pages of an anonymous THP,
-	 * PG_anon_exclusive has been cleared in unmap_folio() and is stored in
-	 * the migration entry instead from where remap_page() will restore it.
- * We can still have PG_anon_exclusive set on effectively unmapped and - * unreferenced sub-pages of an anonymous THP: we can simply drop - * PG_anon_exclusive (-> PG_mappedtodisk) for these here. - */ - page_tail->flags &= ~PAGE_FLAGS_CHECK_AT_PREP; - page_tail->flags |= (head->flags & - ((1L << PG_referenced) | - (1L << PG_swapbacked) | - (1L << PG_swapcache) | - (1L << PG_mlocked) | - (1L << PG_uptodate) | - (1L << PG_active) | - (1L << PG_workingset) | - (1L << PG_locked) | - (1L << PG_unevictable) | +static long page_in_folio_offset(struct page *page, struct folio *folio) +{ + long nr_pages = folio_nr_pages(folio); + unsigned long pages_pfn = page_to_pfn(page); + unsigned long folios_pfn = folio_pfn(folio); + + if (pages_pfn >= folios_pfn && pages_pfn < (folios_pfn + nr_pages)) + return pages_pfn - folios_pfn; + + return -EINVAL; +} + +/* + * It splits @folio into @new_order folios and copies the @folio metadata to + * all the resulting folios. + */ +static int __split_folio_to_order(struct folio *folio, int new_order) +{ + int curr_order = folio_order(folio); + long nr_pages = folio_nr_pages(folio); + long new_nr_pages = 1 << new_order; + long index; + + if (curr_order <= new_order) + return -EINVAL; + + for (index = new_nr_pages; index < nr_pages; index += new_nr_pages) { + struct page *head = &folio->page; + struct page *second_head = head + index; + + /* + * Careful: new_folio is not a "real" folio before we cleared PageTail. + * Don't pass it around before clear_compound_head(). + */ + struct folio *new_folio = (struct folio *)second_head; + + VM_BUG_ON_PAGE(atomic_read(&second_head->_mapcount) != -1, second_head); + + /* + * Clone page flags before unfreezing refcount. + * + * After successful get_page_unless_zero() might follow flags change, + * for example lock_page() which set PG_waiters. + * + * Note that for mapped sub-pages of an anonymous THP, + * PG_anon_exclusive has been cleared in unmap_folio() and is stored in + * the migration entry instead from where remap_page() will restore it. + * We can still have PG_anon_exclusive set on effectively unmapped and + * unreferenced sub-pages of an anonymous THP: we can simply drop + * PG_anon_exclusive (-> PG_mappedtodisk) for these here. + */ + second_head->flags &= ~PAGE_FLAGS_CHECK_AT_PREP; + second_head->flags |= (head->flags & + ((1L << PG_referenced) | + (1L << PG_swapbacked) | + (1L << PG_swapcache) | + (1L << PG_mlocked) | + (1L << PG_uptodate) | + (1L << PG_active) | + (1L << PG_workingset) | + (1L << PG_locked) | + (1L << PG_unevictable) | #ifdef CONFIG_ARCH_USES_PG_ARCH_2 - (1L << PG_arch_2) | + (1L << PG_arch_2) | #endif #ifdef CONFIG_ARCH_USES_PG_ARCH_3 - (1L << PG_arch_3) | + (1L << PG_arch_3) | #endif - (1L << PG_dirty) | - LRU_GEN_MASK | LRU_REFS_MASK)); + (1L << PG_dirty) | + LRU_GEN_MASK | LRU_REFS_MASK)); - /* ->mapping in first and second tail page is replaced by other uses */ - VM_BUG_ON_PAGE(tail > 2 && page_tail->mapping != TAIL_MAPPING, - page_tail); - new_folio->mapping = folio->mapping; - new_folio->index = folio->index + tail; + /* ->mapping in first and second tail page is replaced by other uses */ + VM_BUG_ON_PAGE(new_nr_pages > 2 && second_head->mapping != TAIL_MAPPING, + second_head); + second_head->mapping = head->mapping; + second_head->index = head->index + index; - /* - * page->private should not be set in tail pages. Fix up and warn once - * if private is unexpectedly set. 
- */ - if (unlikely(page_tail->private)) { - VM_WARN_ON_ONCE_PAGE(true, page_tail); - page_tail->private = 0; - } - if (folio_test_swapcache(folio)) - new_folio->swap.val = folio->swap.val + tail; + /* + * page->private should not be set in tail pages. Fix up and warn once + * if private is unexpectedly set. + */ + if (unlikely(second_head->private)) { + VM_WARN_ON_ONCE_PAGE(true, second_head); + second_head->private = 0; + } + if (folio_test_swapcache(folio)) + new_folio->swap.val = folio->swap.val + index; - /* Page flags must be visible before we make the page non-compound. */ - smp_wmb(); + /* Page flags must be visible before we make the page non-compound. */ + smp_wmb(); - /* - * Clear PageTail before unfreezing page refcount. - * - * After successful get_page_unless_zero() might follow put_page() - * which needs correct compound_head(). - */ - clear_compound_head(page_tail); - if (new_order) { - prep_compound_page(page_tail, new_order); - folio_set_large_rmappable(new_folio); - } + /* + * Clear PageTail before unfreezing page refcount. + * + * After successful get_page_unless_zero() might follow put_page() + * which needs correct compound_head(). + */ + clear_compound_head(second_head); + if (new_order) { + prep_compound_page(second_head, new_order); + folio_set_large_rmappable(new_folio); - /* Finally unfreeze refcount. Additional reference from page cache. */ - page_ref_unfreeze(page_tail, - 1 + ((!folio_test_anon(folio) || folio_test_swapcache(folio)) ? - folio_nr_pages(new_folio) : 0)); + folio_set_order(folio, new_order); + } else { + if (PageHead(head)) + ClearPageCompound(head); + } - if (folio_test_young(folio)) - folio_set_young(new_folio); - if (folio_test_idle(folio)) - folio_set_idle(new_folio); + if (folio_test_young(folio)) + folio_set_young(new_folio); + if (folio_test_idle(folio)) + folio_set_idle(new_folio); - folio_xchg_last_cpupid(new_folio, folio_last_cpupid(folio)); + folio_xchg_last_cpupid(new_folio, folio_last_cpupid(folio)); + } - /* - * always add to the tail because some iterators expect new - * pages to show after the currently processed elements - e.g. - * migrate_pages - */ - lru_add_page_tail(folio, page_tail, lruvec, list); + return 0; } -static void __split_huge_page(struct page *page, struct list_head *list, - pgoff_t end, unsigned int new_order) +#define for_each_folio_until_end_safe(iter, iter2, start, end) \ + for (iter = start, iter2 = folio_next(start); \ + iter != end; \ + iter = iter2, iter2 = folio_next(iter2)) + +/* + * It splits a @folio (without mapping) to lower order smaller folios in two + * ways. + * 1. uniform split: the given @folio into multiple @new_order small folios, + * where all small folios have the same order. This is done when + * uniform_split is true. + * 2. buddy allocator like split: the given @folio is split into half and one + * of the half (containing the given page) is split into half until the + * given @page's order becomes @new_order. This is done when uniform_split is + * false. + * + * The high level flow for these two methods are: + * 1. uniform split: a single __split_folio_to_order() is called to split the + * @folio into @new_order, then we traverse all the resulting folios one by + * one in PFN ascending order and perform stats, unfreeze, adding to list, + * and file mapping index operations. + * 2. buddy allocator like split: in general, folio_order - @new_order calls to + * __split_folio_to_order() are called in the for loop to split the @folio + * to one lower order at a time. 
The resulting small folios are processed + * like what is done during the traversal in 1, except the one containing + * @page, which is split in next for loop. + * + * After splitting, the caller's folio reference will be transferred to the + * folio containing @page. The other folios may be freed if they are not mapped. + * + * In terms of locking, after splitting, + * 1. uniform split leaves @page (or the folio contains it) locked; + * 2. buddy allocator like split leaves @folio locked. + * + * If @list is null, tail pages will be added to LRU list, otherwise, to @list. + */ +static int __folio_split_without_mapping(struct folio *folio, int new_order, + struct page *page, struct list_head *list, pgoff_t end, + struct xa_state *xas, struct address_space *mapping, + bool uniform_split) { - struct folio *folio = page_folio(page); - struct page *head = &folio->page; struct lruvec *lruvec; struct address_space *swap_cache = NULL; - unsigned long offset = 0; - int i, nr_dropped = 0; - unsigned int new_nr = 1 << new_order; + struct folio *origin_folio = folio; + struct folio *next_folio = folio_next(folio); + struct folio *new_folio; + struct folio *next; int order = folio_order(folio); - unsigned int nr = 1 << order; - - /* complete memcg works before add pages to LRU */ - split_page_memcg(head, order, new_order); + int split_order = order - 1; + int nr_dropped = 0; if (folio_test_anon(folio) && folio_test_swapcache(folio)) { - offset = swap_cache_index(folio->swap); + if (!uniform_split) + return -EINVAL; + swap_cache = swap_address_space(folio->swap); xa_lock(&swap_cache->i_pages); } + if (folio_test_anon(folio)) + mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1); + /* lock lru list/PageCompound, ref frozen by page_ref_freeze */ lruvec = folio_lruvec_lock(folio); - ClearPageHasHWPoisoned(head); - - for (i = nr - new_nr; i >= new_nr; i -= new_nr) { - struct folio *tail; - __split_huge_page_tail(folio, i, lruvec, list, new_order); - tail = page_folio(head + i); - /* Some pages can be beyond EOF: drop them from page cache */ - if (tail->index >= end) { - if (shmem_mapping(folio->mapping)) - nr_dropped++; - else if (folio_test_clear_dirty(tail)) - folio_account_cleaned(tail, - inode_to_wb(folio->mapping->host)); - __filemap_remove_folio(tail, NULL); - folio_put(tail); - } else if (!folio_test_anon(folio)) { - __xa_store(&folio->mapping->i_pages, tail->index, - tail, 0); - } else if (swap_cache) { - __xa_store(&swap_cache->i_pages, offset + i, - tail, 0); + /* + * split to new_order one order at a time. For uniform split, + * intermediate orders are skipped + */ + for (split_order = order - 1; split_order >= new_order; split_order--) { + int old_order = folio_order(folio); + struct folio *release; + struct folio *end_folio = folio_next(folio); + int status; + + if (folio_test_anon(folio) && split_order == 1) + continue; + if (uniform_split && split_order != new_order) + continue; + + if (mapping) { + /* + * uniform split has xas_split_alloc() called before + * irq is disabled, since xas_nomem() might not be + * able to allocate enough memory. 
+ */ + if (uniform_split) + xas_split(xas, folio, old_order); + else { + xas_set_order(xas, folio->index, split_order); + xas_set_err(xas, -ENOMEM); + if (xas_nomem(xas, 0)) + xas_split(xas, folio, old_order); + else + return -ENOMEM; + } } - } - if (!new_order) - ClearPageCompound(head); - else { - struct folio *new_folio = (struct folio *)head; + split_page_memcg(&folio->page, old_order, split_order); + split_page_owner(&folio->page, old_order, split_order); + pgalloc_tag_split(folio, old_order, split_order); - folio_set_order(new_folio, new_order); - } - unlock_page_lruvec(lruvec); - /* Caller disabled irqs, so they are still disabled here */ + status = __split_folio_to_order(folio, split_order); - split_page_owner(head, order, new_order); - pgalloc_tag_split(folio, order, new_order); + if (status < 0) + return status; - /* See comment in __split_huge_page_tail() */ - if (folio_test_anon(folio)) { - /* Additional pin to swap cache */ - if (folio_test_swapcache(folio)) { - folio_ref_add(folio, 1 + new_nr); - xa_unlock(&swap_cache->i_pages); - } else { - folio_ref_inc(folio); + /* + * Iterate through after-split folios and perform related + * operations. But in buddy allocator like split, the folio + * containing the specified page is skipped until its order + * is new_order, since the folio will be worked on in next + * iteration. + */ + for_each_folio_until_end_safe(release, next, folio, end_folio) { + if (page_in_folio_offset(page, release) >= 0) { + folio = release; + if (split_order != new_order) + continue; + } + if (folio_test_anon(release)) + mod_mthp_stat(folio_order(release), + MTHP_STAT_NR_ANON, 1); + + /* + * Unfreeze refcount first. Additional reference from + * page cache. + */ + folio_ref_unfreeze(release, + 1 + ((!folio_test_anon(origin_folio) || + folio_test_swapcache(origin_folio)) ? + folio_nr_pages(release) : 0)); + + if (release != origin_folio) + lru_add_page_tail(origin_folio, &release->page, + lruvec, list); + + /* Some pages can be beyond EOF: drop them from page cache */ + if (release->index >= end) { + if (shmem_mapping(origin_folio->mapping)) + nr_dropped++; + else if (folio_test_clear_dirty(release)) + folio_account_cleaned(release, + inode_to_wb(origin_folio->mapping->host)); + __filemap_remove_folio(release, NULL); + folio_put(release); + } else if (!folio_test_anon(release)) { + __xa_store(&origin_folio->mapping->i_pages, + release->index, &release->page, 0); + } else if (swap_cache) { + __xa_store(&swap_cache->i_pages, + swap_cache_index(release->swap), + &release->page, 0); + } } - } else { - /* Additional pin to page cache */ - folio_ref_add(folio, 1 + new_nr); - xa_unlock(&folio->mapping->i_pages); } + + unlock_page_lruvec(lruvec); + + if (folio_test_anon(origin_folio)) { + if (folio_test_swapcache(origin_folio)) + xa_unlock(&swap_cache->i_pages); + } else + xa_unlock(&mapping->i_pages); + + /* Caller disabled irqs, so they are still disabled here */ local_irq_enable(); - if (nr_dropped) - shmem_uncharge(folio->mapping->host, nr_dropped); - remap_page(folio, nr, PageAnon(head) ? RMP_USE_SHARED_ZEROPAGE : 0); + remap_page(origin_folio, 1 << order, + folio_test_anon(origin_folio) ? + RMP_USE_SHARED_ZEROPAGE : 0); /* - * set page to its compound_head when split to non order-0 pages, so - * we can skip unlocking it below, since PG_locked is transferred to - * the compound_head of the page and the caller will unlock it. + * At this point, folio should contain the specified page, so that it + * will be left to the caller to unlock it. 
*/ - if (new_order) - page = compound_head(page); - - for (i = 0; i < nr; i += new_nr) { - struct page *subpage = head + i; - struct folio *new_folio = page_folio(subpage); - if (subpage == page) + for_each_folio_until_end_safe(new_folio, next, origin_folio, next_folio) { + if (uniform_split && new_folio == folio) + continue; + if (!uniform_split && new_folio == origin_folio) continue; - folio_unlock(new_folio); + folio_unlock(new_folio); /* * Subpages may be freed if there wasn't any mapping * like if add_to_swap() is running on a lru page that @@ -3358,81 +3480,18 @@ static void __split_huge_page(struct page *page, struct list_head *list, * requires taking the lru_lock so we do the put_page * of the tail pages after the split is complete. */ - free_page_and_swap_cache(subpage); + free_page_and_swap_cache(&new_folio->page); } + return 0; } -/* Racy check whether the huge page can be split */ -bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins) -{ - int extra_pins; - /* Additional pins from page cache */ - if (folio_test_anon(folio)) - extra_pins = folio_test_swapcache(folio) ? - folio_nr_pages(folio) : 0; - else - extra_pins = folio_nr_pages(folio); - if (pextra_pins) - *pextra_pins = extra_pins; - return folio_mapcount(folio) == folio_ref_count(folio) - extra_pins - - caller_pins; -} -/* - * This function splits a large folio into smaller folios of order @new_order. - * @page can point to any page of the large folio to split. The split operation - * does not change the position of @page. - * - * Prerequisites: - * - * 1) The caller must hold a reference on the @page's owning folio, also known - * as the large folio. - * - * 2) The large folio must be locked. - * - * 3) The folio must not be pinned. Any unexpected folio references, including - * GUP pins, will result in the folio not getting split; instead, the caller - * will receive an -EAGAIN. - * - * 4) @new_order > 1, usually. Splitting to order-1 anonymous folios is not - * supported for non-file-backed folios, because folio->_deferred_list, which - * is used by partially mapped folios, is stored in subpage 2, but an order-1 - * folio only has subpages 0 and 1. File-backed order-1 folios are supported, - * since they do not use _deferred_list. - * - * After splitting, the caller's folio reference will be transferred to @page, - * resulting in a raised refcount of @page after this call. The other pages may - * be freed if they are not mapped. - * - * If @list is null, tail pages will be added to LRU list, otherwise, to @list. - * - * Pages in @new_order will inherit the mapping, flags, and so on from the - * huge page. - * - * Returns 0 if the huge page was split successfully. - * - * Returns -EAGAIN if the folio has unexpected reference (e.g., GUP) or if - * the folio was concurrently removed from the page cache. - * - * Returns -EBUSY when trying to split the huge zeropage, if the folio is - * under writeback, if fs-specific folio metadata cannot currently be - * released, or if some unexpected race happened (e.g., anon VMA disappeared, - * truncation). - * - * Callers should ensure that the order respects the address space mapping - * min-order if one is set for non-anonymous folios. - * - * Returns -EINVAL when trying to split to an order that is incompatible - * with the folio. Splitting to order 0 is compatible with all folios. 
- */ -int split_huge_page_to_list_to_order(struct page *page, struct list_head *list, - unsigned int new_order) +static int __folio_split(struct folio *folio, unsigned int new_order, + struct page *page, struct list_head *list, bool uniform_split) { - struct folio *folio = page_folio(page); struct deferred_split *ds_queue = get_deferred_split_queue(folio); - /* reset xarray order to new order after split */ - XA_STATE_ORDER(xas, &folio->mapping->i_pages, folio->index, new_order); + XA_STATE(xas, &folio->mapping->i_pages, folio->index); bool is_anon = folio_test_anon(folio); struct address_space *mapping = NULL; struct anon_vma *anon_vma = NULL; @@ -3453,9 +3512,10 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list, VM_WARN_ONCE(1, "Cannot split to order-1 folio"); return -EINVAL; } - } else if (new_order) { + } else { /* Split shmem folio to non-zero order not supported */ - if (shmem_mapping(folio->mapping)) { + if ((!uniform_split || new_order) && + shmem_mapping(folio->mapping)) { VM_WARN_ONCE(1, "Cannot split shmem folio to non-0 order"); return -EINVAL; @@ -3466,7 +3526,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list, * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping * does not actually support large folios properly. */ - if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && + if (new_order && IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && !mapping_large_folio_support(folio->mapping)) { VM_WARN_ONCE(1, "Cannot split file folio to non-0 order"); @@ -3475,7 +3535,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list, } /* Only swapping a whole PMD-mapped folio is supported */ - if (folio_test_swapcache(folio) && new_order) + if (folio_test_swapcache(folio) && (!uniform_split || new_order)) return -EINVAL; is_hzp = is_huge_zero_folio(folio); @@ -3532,10 +3592,13 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list, goto out; } - xas_split_alloc(&xas, folio, folio_order(folio), gfp); - if (xas_error(&xas)) { - ret = xas_error(&xas); - goto out; + if (uniform_split) { + xas_set_order(&xas, folio->index, new_order); + xas_split_alloc(&xas, folio, folio_order(folio), gfp); + if (xas_error(&xas)) { + ret = xas_error(&xas); + goto out; + } } anon_vma = NULL; @@ -3600,7 +3663,6 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list, if (mapping) { int nr = folio_nr_pages(folio); - xas_split(&xas, folio, folio_order(folio)); if (folio_test_pmd_mappable(folio) && new_order < HPAGE_PMD_ORDER) { if (folio_test_swapbacked(folio)) { @@ -3618,8 +3680,8 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list, mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1); mod_mthp_stat(new_order, MTHP_STAT_NR_ANON, 1 << (order - new_order)); } - __split_huge_page(page, list, end, new_order); - ret = 0; + ret = __folio_split_without_mapping(page_folio(page), new_order, + page, list, end, &xas, mapping, uniform_split); } else { spin_unlock(&ds_queue->split_queue_lock); fail: @@ -3645,6 +3707,61 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list, return ret; } +/* + * This function splits a large folio into smaller folios of order @new_order. + * @page can point to any page of the large folio to split. The split operation + * does not change the position of @page. + * + * Prerequisites: + * + * 1) The caller must hold a reference on the @page's owning folio, also known + * as the large folio. 
+ * + * 2) The large folio must be locked. + * + * 3) The folio must not be pinned. Any unexpected folio references, including + * GUP pins, will result in the folio not getting split; instead, the caller + * will receive an -EAGAIN. + * + * 4) @new_order > 1, usually. Splitting to order-1 anonymous folios is not + * supported for non-file-backed folios, because folio->_deferred_list, which + * is used by partially mapped folios, is stored in subpage 2, but an order-1 + * folio only has subpages 0 and 1. File-backed order-1 folios are supported, + * since they do not use _deferred_list. + * + * After splitting, the caller's folio reference will be transferred to @page, + * resulting in a raised refcount of @page after this call. The other pages may + * be freed if they are not mapped. + * + * If @list is null, tail pages will be added to LRU list, otherwise, to @list. + * + * Pages in @new_order will inherit the mapping, flags, and so on from the + * huge page. + * + * Returns 0 if the huge page was split successfully. + * + * Returns -EAGAIN if the folio has unexpected reference (e.g., GUP) or if + * the folio was concurrently removed from the page cache. + * + * Returns -EBUSY when trying to split the huge zeropage, if the folio is + * under writeback, if fs-specific folio metadata cannot currently be + * released, or if some unexpected race happened (e.g., anon VMA disappeared, + * truncation). + * + * Callers should ensure that the order respects the address space mapping + * min-order if one is set for non-anonymous folios. + * + * Returns -EINVAL when trying to split to an order that is incompatible + * with the folio. Splitting to order 0 is compatible with all folios. + */ +int split_huge_page_to_list_to_order(struct page *page, struct list_head *list, + unsigned int new_order) +{ + struct folio *folio = page_folio(page); + + return __folio_split(folio, new_order, page, list, true); +} + int min_order_for_split(struct folio *folio) { if (folio_test_anon(folio)) @@ -3669,6 +3786,29 @@ int split_folio_to_list(struct folio *folio, struct list_head *list) return split_huge_page_to_list_to_order(&folio->page, list, ret); } +/* + * folio_split: split a folio at offset_in_new_order to a new_order folio + * @folio: folio to split + * @new_order: the order of the new folio + * @page: a page within the new folio + * + * return: 0: successful, <0 failed + * + * Split a folio at offset_in_new_order to a new_order folio, leave the + * remaining subpages of the original folio as large as possible. For example, + * split an order-9 folio at its third order-3 subpages to an order-3 folio. + * There are 2^6=64 order-3 subpages in an order-9 folio and the result will be + * a set of folios with different order and the new folio is in bracket: + * [order-4, {order-3}, order-3, order-5, order-6, order-7, order-8]. + * + * After split, folio is left locked for caller. 
+ */
+static int folio_split(struct folio *folio, unsigned int new_order,
+		struct page *page, struct list_head *list)
+{
+	return __folio_split(folio, new_order, page, list, false);
+}
+
 void __folio_undo_large_rmappable(struct folio *folio)
 {
 	struct deferred_split *ds_queue;
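
As a quick illustration (not part of the patch), the order sequences quoted
in the commit message above can be reproduced with the standalone userspace
sketch below. It only models the order bookkeeping of the buddy-allocator
like split (no locking, refcounts, or mapping updates), and the helper names
are invented for illustration.

#include <stdio.h>
#include <stdbool.h>

/* Print one resulting folio; anon folios do not support order-1, so an
 * order-1 remainder is emitted as two order-0 folios instead. */
static void emit(int order, bool anon)
{
	if (anon && order == 1) {
		printf("O-0 O-0 ");
		return;
	}
	printf("O-%d ", order);
}

/* Split the range [start, start + 2^order) so that the piece containing
 * page index @at ends up with order @new_order; the untouched buddy halves
 * are emitted as-is, in ascending page order. */
static void split(long start, int order, int new_order, long at, bool anon)
{
	long half;

	if (order == new_order) {
		emit(order, anon);	/* the piece containing @at */
		return;
	}

	half = 1L << (order - 1);
	if (at < start + half) {
		split(start, order - 1, new_order, at, anon);
		emit(order - 1, anon);	/* untouched high half */
	} else {
		emit(order - 1, anon);	/* untouched low half */
		split(start + half, order - 1, new_order, at, anon);
	}
}

int main(void)
{
	/* Split an order-9 folio at page index 2 (the 3rd subpage) to order 0. */
	printf("anon:      ");
	split(0, 9, 0, 2, true);
	printf("\npagecache: ");
	split(0, 9, 0, 2, false);
	printf("\n");
	return 0;
}

Built with e.g. "gcc -Wall folio_split_orders.c", this prints
"O-0 O-0 O-0 O-0 O-2 O-3 O-4 O-5 O-6 O-7 O-8" for the anon case and
"O-1 O-0 O-0 O-2 O-3 O-4 O-5 O-6 O-7 O-8" for the pagecache case, matching
the commit message.
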
From patchwork Mon Oct 28 18:09:31 2024
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13853817
From: Zi Yan <ziy@nvidia.com>
To: linux-mm@kvack.org, "Matthew Wilcox (Oracle)"
Cc: Ryan Roberts, Hugh Dickins, "Kirill A. Shutemov", David Hildenbrand,
 Yang Shi, Miaohe Lin, Kefeng Wang, Yu Zhao, John Hubbard,
 linux-kernel@vger.kernel.org, Zi Yan
Subject: [PATCH v1 2/3] mm/huge_memory: add folio_split() to debugfs testing
 interface.
Date: Mon, 28 Oct 2024 14:09:31 -0400
Message-ID: <20241028180932.1319265-3-ziy@nvidia.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20241028180932.1319265-1-ziy@nvidia.com>
References: <20241028180932.1319265-1-ziy@nvidia.com>

This allows testing folio_split() by specifying an additional in-folio page
offset parameter to the split_huge_page debugfs interface.

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
A userspace usage sketch for the extended interface follows the diff.

 mm/huge_memory.c | 46 ++++++++++++++++++++++++++++++++++------------
 1 file changed, 34 insertions(+), 12 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0224925e4c3c..4ccd23473e2b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -4072,7 +4072,8 @@ static inline bool vma_not_suitable_for_thp_split(struct vm_area_struct *vma)
 }
 
 static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
-				unsigned long vaddr_end, unsigned int new_order)
+				unsigned long vaddr_end, unsigned int new_order,
+				long in_folio_offset)
 {
 	int ret = 0;
 	struct task_struct *task;
@@ -4156,8 +4157,16 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 		if (!folio_test_anon(folio) && folio->mapping != mapping)
 			goto unlock;
 
-		if (!split_folio_to_order(folio, target_order))
-			split++;
+		if (in_folio_offset < 0 ||
+		    in_folio_offset >= folio_nr_pages(folio)) {
+			if (!split_folio_to_order(folio, target_order))
+				split++;
+		} else {
+			struct page *split_at = folio_page(folio,
+							   in_folio_offset);
+			if (!folio_split(folio, target_order, split_at, NULL))
+				split++;
+		}
 
 unlock:
@@ -4180,7 +4189,8 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 }
 
 static int split_huge_pages_in_file(const char *file_path, pgoff_t off_start,
-				pgoff_t off_end, unsigned int new_order)
+				pgoff_t off_end, unsigned int new_order,
+				long in_folio_offset)
 {
 	struct filename *file;
 	struct file *candidate;
@@ -4229,8 +4239,15 @@ static int split_huge_pages_in_file(const char *file_path, pgoff_t off_start,
 		if (folio->mapping != mapping)
 			goto unlock;
 
-		if (!split_folio_to_order(folio, target_order))
-			split++;
+		if (in_folio_offset < 0 || in_folio_offset >= nr_pages) {
+			if (!split_folio_to_order(folio, target_order))
+				split++;
+		} else {
+			struct page *split_at = folio_page(folio,
+							   in_folio_offset);
+			if (!folio_split(folio, target_order, split_at, NULL))
+				split++;
+		}
 
 unlock:
 		folio_unlock(folio);
@@ -4263,6 +4280,7 @@ static ssize_t split_huge_pages_write(struct file *file, const char __user *buf,
 	int pid;
 	unsigned long vaddr_start, vaddr_end;
 	unsigned int new_order = 0;
+	long in_folio_offset = -1;
 
 	ret = mutex_lock_interruptible(&split_debug_mutex);
 	if (ret)
@@ -4291,29 +4309,33 @@ static ssize_t split_huge_pages_write(struct file *file, const char __user *buf,
 			goto out;
 		}
 
-		ret = sscanf(buf, "0x%lx,0x%lx,%d", &off_start, &off_end, &new_order);
-		if (ret != 2 && ret != 3) {
+		ret = sscanf(buf, "0x%lx,0x%lx,%d,%ld", &off_start, &off_end,
+			     &new_order, &in_folio_offset);
+		if (ret != 2 && ret != 3 && ret != 4) {
 			ret = -EINVAL;
 			goto out;
 		}
 
-		ret = split_huge_pages_in_file(file_path, off_start, off_end, new_order);
+		ret = split_huge_pages_in_file(file_path, off_start, off_end,
+					       new_order, in_folio_offset);
 		if (!ret)
 			ret = input_len;
 
 		goto out;
 	}
 
-	ret = sscanf(input_buf, "%d,0x%lx,0x%lx,%d",
-			&pid, &vaddr_start, &vaddr_end, &new_order);
+	ret = sscanf(input_buf, "%d,0x%lx,0x%lx,%d,%ld", &pid, &vaddr_start,
+		     &vaddr_end, &new_order, &in_folio_offset);
 	if (ret == 1 && pid == 1) {
 		split_huge_pages_all();
 		ret = strlen(input_buf);
 		goto out;
-	} else if (ret != 3 && ret != 4) {
+	} else if (ret != 3 && ret != 4 && ret != 5) {
 		ret = -EINVAL;
 		goto out;
 	}
 
-	ret = split_huge_pages_pid(pid, vaddr_start, vaddr_end, new_order);
+	ret = split_huge_pages_pid(pid, vaddr_start, vaddr_end, new_order,
+				   in_folio_offset);
 	if (!ret)
 		ret = strlen(input_buf);
 out:
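
As a usage illustration (not part of the patch), the extended interface can
be driven from userspace roughly as below. The split_huge_pages debugfs file
and the leading <pid>,<vaddr_start>,<vaddr_end>,<new_order> fields already
exist; the trailing in-folio offset is the new optional field added here,
and the pid, address range, order, and offset values are made-up examples.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	FILE *f = fopen("/sys/kernel/debug/split_huge_pages", "w");

	if (!f) {
		perror("split_huge_pages");
		return EXIT_FAILURE;
	}

	/*
	 * <pid>,<vaddr_start>,<vaddr_end>,<new_order>,<in_folio_offset>:
	 * split THPs mapped in [vaddr_start, vaddr_end) of pid 1234 to
	 * order 0, splitting each folio at page index 2 via folio_split()
	 * rather than doing a uniform split.
	 */
	fprintf(f, "1234,0x7f0000000000,0x7f0000200000,0,2");

	if (fclose(f) != 0) {
		perror("split_huge_pages");
		return EXIT_FAILURE;
	}
	return EXIT_SUCCESS;
}

The file-path form takes the new field in the same trailing position,
<path>,<off_start>,<off_end>,<new_order>,<in_folio_offset>, with the page
cache offsets given as 0x-prefixed hex, matching the sscanf formats in this
patch.
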
From: Zi Yan
To: linux-mm@kvack.org, "Matthew Wilcox (Oracle)"
Cc: Ryan Roberts, Hugh Dickins, "Kirill A. Shutemov", David Hildenbrand,
    Yang Shi, Miaohe Lin, Kefeng Wang, Yu Zhao, John Hubbard,
    linux-kernel@vger.kernel.org, Zi Yan
Subject: [PATCH v1 3/3] mm/truncate: use folio_split() for truncate operation.
Date: Mon, 28 Oct 2024 14:09:32 -0400
Message-ID: <20241028180932.1319265-4-ziy@nvidia.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20241028180932.1319265-1-ziy@nvidia.com>
References: <20241028180932.1319265-1-ziy@nvidia.com>
Instead of splitting a large folio uniformly during truncation, use a
buddy-allocator-like split at the start of the truncation range to
minimize the number of resulting folios.

For example, to truncate an order-4 folio
[0, 1, 2, 3, 4, 5, ..., 15] between [3, 10] (inclusive), folio_split()
splits the folio into [0,1], [2], [3], [4..7], [8..15]; [3] and [4..7]
can be dropped, and [8..15] is kept with zeros in [8..10].

It is possible to do a further folio_split() at 10 so that more of the
resulting folios can be dropped, but that is left as a possible future
optimization. Another possible optimization is to make folio_split()
split a folio based on a given range, like [3..10] above; that
complicates folio_split(), so it will be investigated when necessary.

Signed-off-by: Zi Yan
---
 include/linux/huge_mm.h | 12 ++++++++++++
 mm/huge_memory.c        |  2 +-
 mm/truncate.c           |  5 ++++-
 3 files changed, 17 insertions(+), 2 deletions(-)
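To make the worked example above concrete, here is a small userspace
sketch (illustrative arithmetic only, not kernel code) that prints the
buddy-aligned pieces produced when an order-4 folio is split at page
index 3 with new_order 0, matching the [0,1], [2], [3], [4..7], [8..15]
result described above:

/* buddy_split_demo.c - illustrative only. */
#include <stdio.h>

static void show_split(unsigned int order, unsigned int new_order,
		       unsigned long split_at)
{
	unsigned long lo = 0;

	while (order > new_order) {
		unsigned long half = 1UL << (order - 1);

		if (split_at < lo + half) {
			/* split point is in the left half; keep the right buddy whole */
			printf("[%lu..%lu] order %u\n",
			       lo + half, lo + 2 * half - 1, order - 1);
		} else {
			/* split point is in the right half; keep the left buddy whole */
			printf("[%lu..%lu] order %u\n",
			       lo, lo + half - 1, order - 1);
			lo += half;
		}
		order--;
	}
	printf("[%lu..%lu] order %u   <- contains split_at\n",
	       lo, lo + (1UL << order) - 1, order);
}

int main(void)
{
	/* The example from the commit message: order-4 folio, split at page 3. */
	show_split(4, 0, 3);
	return 0;
}

Only the piece containing the split point ends up at new_order; every
other piece stays as large as its buddy alignment allows, which is what
keeps the number of resulting folios small.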
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index b94c2e8ee918..8048500e7bc2 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -339,6 +339,18 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 		unsigned int new_order);
 int min_order_for_split(struct folio *folio);
 int split_folio_to_list(struct folio *folio, struct list_head *list);
+int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
+		struct list_head *list);
+static inline int split_folio_at(struct folio *folio, struct page *page,
+		struct list_head *list)
+{
+	int ret = min_order_for_split(folio);
+
+	if (ret < 0)
+		return ret;
+
+	return folio_split(folio, ret, page, list);
+}
 static inline int split_huge_page(struct page *page)
 {
 	struct folio *folio = page_folio(page);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 4ccd23473e2b..a688b73fa793 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3803,7 +3803,7 @@ int split_folio_to_list(struct folio *folio, struct list_head *list)
  *
  * After split, folio is left locked for caller.
  */
-static int folio_split(struct folio *folio, unsigned int new_order,
+int folio_split(struct folio *folio, unsigned int new_order,
 		struct page *page, struct list_head *list)
 {
 	return __folio_split(folio, new_order, page, list, false);
diff --git a/mm/truncate.c b/mm/truncate.c
index e5151703ba04..dbd81c21b460 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -179,6 +179,7 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 {
 	loff_t pos = folio_pos(folio);
 	unsigned int offset, length;
+	long in_folio_offset;
 
 	if (pos < start)
 		offset = start - pos;
@@ -208,7 +209,9 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 	folio_invalidate(folio, offset, length);
 	if (!folio_test_large(folio))
 		return true;
-	if (split_folio(folio) == 0)
+
+	in_folio_offset = PAGE_ALIGN_DOWN(offset) / PAGE_SIZE;
+	if (split_folio_at(folio, folio_page(folio, in_folio_offset), NULL) == 0)
 		return true;
 	if (folio_test_dirty(folio))
 		return false;
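In the truncate path, the new split point is simply the page of the
folio that contains the first truncated byte. A tiny illustration of
that arithmetic, assuming 4 KiB pages (the constants below are made up
for the demo and mirror the commit-message example of truncating from
page 3 of the folio):

/* in_folio_offset_demo.c - illustrative only. */
#include <stdio.h>

#define PAGE_SIZE		4096UL
#define PAGE_ALIGN_DOWN(x)	((x) & ~(PAGE_SIZE - 1))

int main(void)
{
	unsigned long pos = 0;				/* byte offset of the folio in the file */
	unsigned long start = 3 * PAGE_SIZE + 123;	/* truncation start, mid-page */
	unsigned long offset = start - pos;		/* offset of truncation within the folio */
	long in_folio_offset = PAGE_ALIGN_DOWN(offset) / PAGE_SIZE;

	printf("split at in-folio page index %ld\n", in_folio_offset);	/* prints 3 */
	return 0;
}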