From patchwork Fri Oct 6 03:20:02 2023
X-Patchwork-Submitter: Mike Kravetz
X-Patchwork-Id: 13410936
From: Mike Kravetz <mike.kravetz@oracle.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Muchun Song, Joao Martins, Konrad Dybcio, Oscar Salvador,
    David Hildenbrand, Miaohe Lin, David Rientjes, Anshuman Khandual,
    Naoya Horiguchi, Barry Song <21cnbao@gmail.com>, Michal Hocko,
    Matthew Wilcox, Xiongchun Duan, Andrew Morton, Mike Kravetz
Subject: [PATCH v7 0/8] Batch hugetlb vmemmap modification operations
Date: Thu, 5 Oct 2023 20:20:02 -0700
Message-ID: <20231006032012.296473-1-mike.kravetz@oracle.com>
X-Mailer: git-send-email 2.41.0
When hugetlb vmemmap optimization was introduced, the overhead of
enabling the option was measured as described in commit 426e5c429d16 [1].
The summary states that allocating a hugetlb page should be ~2x slower
with optimization and freeing a hugetlb page should be ~2-3x slower.
Such overhead was deemed an acceptable trade-off for the memory savings
obtained by freeing vmemmap pages.

It was recently reported that the overhead associated with enabling
vmemmap optimization could be as high as 190x for hugetlb page
allocations. Yes, 190x!
Some actual numbers from other environments are:

Bare Metal 8 socket Intel(R) Xeon(R) CPU E7-8895
------------------------------------------------
Unmodified next-20230824, vm.hugetlb_optimize_vmemmap = 0
time echo 500000 > .../hugepages-2048kB/nr_hugepages
real    0m4.119s
time echo 0 > .../hugepages-2048kB/nr_hugepages
real    0m4.477s

Unmodified next-20230824, vm.hugetlb_optimize_vmemmap = 1
time echo 500000 > .../hugepages-2048kB/nr_hugepages
real    0m28.973s
time echo 0 > .../hugepages-2048kB/nr_hugepages
real    0m36.748s

VM with 252 vcpus on host with 2 socket AMD EPYC 7J13 Milan
-----------------------------------------------------------
Unmodified next-20230824, vm.hugetlb_optimize_vmemmap = 0
time echo 524288 > .../hugepages-2048kB/nr_hugepages
real    0m2.463s
time echo 0 > .../hugepages-2048kB/nr_hugepages
real    0m2.931s

Unmodified next-20230824, vm.hugetlb_optimize_vmemmap = 1
time echo 524288 > .../hugepages-2048kB/nr_hugepages
real    2m27.609s
time echo 0 > .../hugepages-2048kB/nr_hugepages
real    2m29.924s

In the VM environment, enabling hugetlb vmemmap optimization resulted
in allocation times being 61x slower. A quick profile showed that the
vast majority of this overhead was due to TLB flushing. Each time we
modify the kernel pagetable we need to flush the TLB. For each hugetlb
page that is optimized, there could potentially be two TLB flushes
performed: one for the vmemmap pages associated with the hugetlb page,
and potentially another if the vmemmap pages are mapped at the PMD
level and must be split. The TLB flushes required for the kernel
pagetable result in a broadcast IPI, with each CPU having to flush a
range of pages, or do a global flush if a threshold is exceeded. So,
the flush time increases with the number of CPUs. In addition, in
virtual environments the broadcast IPI can't be accelerated by
hypervisor hardware and leads to traps that need to wakeup/IPI all
vCPUs, which is very expensive.
Because of this, the slowdown in virtual environments is even worse
than on bare metal as the number of vCPUs/CPUs increases.

The following series attempts to reduce the amount of time spent in TLB
flushing. The idea is to batch the vmemmap modification operations for
multiple hugetlb pages. Instead of doing one or two TLB flushes for
each page, we do two TLB flushes for each batch of pages: one flush
after splitting pages mapped at the PMD level, and another after
remapping the vmemmap associated with all hugetlb pages. Results of
such batching are as follows:

Bare Metal 8 socket Intel(R) Xeon(R) CPU E7-8895
------------------------------------------------
next-20230824 + Batching patches, vm.hugetlb_optimize_vmemmap = 0
time echo 500000 > .../hugepages-2048kB/nr_hugepages
real    0m4.719s
time echo 0 > .../hugepages-2048kB/nr_hugepages
real    0m4.245s

next-20230824 + Batching patches, vm.hugetlb_optimize_vmemmap = 1
time echo 500000 > .../hugepages-2048kB/nr_hugepages
real    0m7.267s
time echo 0 > .../hugepages-2048kB/nr_hugepages
real    0m13.199s

VM with 252 vcpus on host with 2 socket AMD EPYC 7J13 Milan
-----------------------------------------------------------
next-20230824 + Batching patches, vm.hugetlb_optimize_vmemmap = 0
time echo 524288 > .../hugepages-2048kB/nr_hugepages
real    0m2.715s
time echo 0 > .../hugepages-2048kB/nr_hugepages
real    0m3.186s

next-20230824 + Batching patches, vm.hugetlb_optimize_vmemmap = 1
time echo 524288 > .../hugepages-2048kB/nr_hugepages
real    0m4.799s
time echo 0 > .../hugepages-2048kB/nr_hugepages
real    0m5.273s

With batching, results are back in the 2-3x slowdown range.

This series is based on mm-unstable (October 5).

Changes v6 -> v7:
- Fixed hugetlb_vmemmap_restore_folios stub for the
  !CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP case
- Added Muchun RB for patches 4 and 8

Changes v5 -> v6:
- patch 4 in bulk_vmemmap_restore_error remove folio from list before
  calling add_hugetlb_folio.
- Added Muchun RB for patches 2 and 3

Changes v4 -> v5:
- patch 3 comment style updated, removed unnecessary INIT_LIST_HEAD
- patch 4 updated hugetlb_vmemmap_restore_folios to pass back the
  number of restored folios in the non-error case. In addition, the
  routine passes back a list of folios with vmemmap. Naming is more
  consistent.
- patch 5 removed over-optimization and added Muchun RB
- patch 6 break and early return in ENOMEM case. Updated comments.
  Added Muchun RB.
- patch 7 updated comments about splitting failure. Added Muchun RB.
- patch 8 made comments consistent.

Changes v3 -> v4:
- Rebased on mm-unstable and dropped requisite patches.
- patch 2 updated to take bootmem vmemmap initialization into account
- patch 3 more changes for bootmem hugetlb pages. Added routine
  prep_and_add_bootmem_folios.
- patch 5 in hugetlb_vmemmap_optimize_folios on ENOMEM check for
  list_empty before freeing and retry. This is more important in a
  subsequent patch where we flush_tlb_all after ENOMEM.

Changes v2 -> v3:
- patch 5 was part of an earlier series that was not picked up. It is
  included here as it helps with batching optimizations.
- patch 6 hugetlb_vmemmap_restore_folios is changed from type void to
  returning an error code, as well as an additional output parameter
  providing the number of folios for which vmemmap was actually
  restored. The caller can then be more intelligent about processing
  the list.
- patch 9 eliminate local list in vmemmap_restore_pte. The routine
  hugetlb_vmemmap_optimize_folios checks for ENOMEM and frees
  accumulated vmemmap pages while processing the list.
- patch 10 introduce flags field to struct vmemmap_remap_walk and
  VMEMMAP_SPLIT_NO_TLB_FLUSH for not flushing during pass to split
  PMDs.
- patch 11 rename flag VMEMMAP_REMAP_NO_TLB_FLUSH and pass in from
  callers.

Changes v1 -> v2:
- patch 5 now takes into account the requirement that only compound
  pages with the hugetlb flag set can be passed to vmemmap routines.
  This involved separating the 'prep' of hugetlb pages even further.
  The code dealing with bootmem allocations was also modified so that
  batching is possible. Adding a 'batch' of hugetlb pages to their
  respective free lists is now done in one lock cycle.
- patch 7 added description of routine hugetlb_vmemmap_restore_folios
  (Muchun).
- patch 8 rename bulk_pages to vmemmap_pages and let caller be
  responsible for freeing (Muchun)
- patch 9 use 'walk->remap_pte' to determine if a split only operation
  is being performed (Muchun). Removed unused variable and
  hugetlb_optimize_vmemmap_key (Muchun).
- patch 10 pass 'flags' variable instead of bool to indicate behavior
  and allow for future expansion (Muchun). Single flag
  VMEMMAP_NO_TLB_FLUSH. Provide detailed comment about the need to
  keep old and new vmemmap pages in sync (Muchun).
- patch 11 pass flag variable as in patch 10 (Muchun).

Joao Martins (2):
  hugetlb: batch PMD split for bulk vmemmap dedup
  hugetlb: batch TLB flushes when freeing vmemmap

Mike Kravetz (6):
  hugetlb: optimize update_and_free_pages_bulk to avoid lock cycles
  hugetlb: restructure pool allocations
  hugetlb: perform vmemmap optimization on a list of pages
  hugetlb: perform vmemmap restoration on a list of pages
  hugetlb: batch freeing of vmemmap pages
  hugetlb: batch TLB flushes when restoring vmemmap

 mm/hugetlb.c         | 301 ++++++++++++++++++++++++++++++++++++-------
 mm/hugetlb_vmemmap.c | 273 +++++++++++++++++++++++++++++++++------
 mm/hugetlb_vmemmap.h |  16 +++
 3 files changed, 507 insertions(+), 83 deletions(-)