From patchwork Wed Mar 19 19:22:09 2025
X-Patchwork-Submitter: Shivank Garg
X-Patchwork-Id: 14023040
From: Shivank Garg
Subject: [PATCH RFC V2 6/9] mm/migrate: introduce multi-threaded page copy routine
Date: Wed, 19 Mar 2025 19:22:09 +0000
Message-ID: <20250319192211.10092-7-shivankg@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20250319192211.10092-1-shivankg@amd.com>
References: <20250319192211.10092-1-shivankg@amd.com>
MIME-Version: 1.0
From: Zi Yan

Now that page copies are batched, a multi-threaded page copy routine can be
used to increase page copy throughput.

Enable it with:

  echo 1 > /sys/kernel/cpu_mt/offloading
  echo NR_THREADS > /sys/kernel/cpu_mt/threads

Disable it with:

  echo 0 > /sys/kernel/cpu_mt/offloading

[Shivank: Convert the original MT copy_pages implementation into a module,
leveraging the migrate offload infrastructure and sysfs interface.]

Signed-off-by: Zi Yan
Signed-off-by: Shivank Garg
---
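A minimal user-space sketch of driving the two new sysfs knobs (illustrative
only, not part of the patch; it just mirrors the echo commands in the
changelog and assumes the module is loaded):

	#include <stdio.h>
	#include <stdlib.h>

	/* Write a string to a sysfs file and fail loudly if it is missing. */
	static void write_sysfs(const char *path, const char *val)
	{
		FILE *f = fopen(path, "w");

		if (!f) {
			perror(path);
			exit(EXIT_FAILURE);
		}
		fputs(val, f);
		fclose(f);
	}

	int main(void)
	{
		/* 1..64 worker threads, per MAX_NUM_COPY_THREADS below. */
		write_sysfs("/sys/kernel/cpu_mt/threads", "8");
		/* 1 enables MT offloading, 0 disables it. */
		write_sysfs("/sys/kernel/cpu_mt/offloading", "1");
		return 0;
	}
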
 drivers/Kconfig                        |   2 +
 drivers/Makefile                       |   3 +
 drivers/migoffcopy/Kconfig             |   9 +
 drivers/migoffcopy/Makefile            |   1 +
 drivers/migoffcopy/mtcopy/Makefile     |   1 +
 drivers/migoffcopy/mtcopy/copy_pages.c | 337 +++++++++++++++++++++++++
 mm/migrate.c                           |  11 +-
 7 files changed, 357 insertions(+), 7 deletions(-)
 create mode 100644 drivers/migoffcopy/Kconfig
 create mode 100644 drivers/migoffcopy/Makefile
 create mode 100644 drivers/migoffcopy/mtcopy/Makefile
 create mode 100644 drivers/migoffcopy/mtcopy/copy_pages.c

diff --git a/drivers/Kconfig b/drivers/Kconfig
index 7bdad836fc62..2e20eb83cd0b 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -245,4 +245,6 @@ source "drivers/cdx/Kconfig"
 
 source "drivers/dpll/Kconfig"
 
+source "drivers/migoffcopy/Kconfig"
+
 endmenu
diff --git a/drivers/Makefile b/drivers/Makefile
index 45d1c3e630f7..4df928a36ea3 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -42,6 +42,9 @@ obj-y += clk/
 # really early.
 obj-$(CONFIG_DMADEVICES) += dma/
 
+# Migration copy Offload
+obj-$(CONFIG_OFFC_MIGRATION) += migoffcopy/
+
 # SOC specific infrastructure drivers.
 obj-y += soc/
 obj-$(CONFIG_PM_GENERIC_DOMAINS) += pmdomain/
diff --git a/drivers/migoffcopy/Kconfig b/drivers/migoffcopy/Kconfig
new file mode 100644
index 000000000000..e73698af3e72
--- /dev/null
+++ b/drivers/migoffcopy/Kconfig
@@ -0,0 +1,9 @@
+config MTCOPY_CPU
+	bool "Multi-Threaded Copy with CPU"
+	depends on OFFC_MIGRATION
+	default n
+	help
+	  Interface MT COPY CPU driver for batch page migration
+	  offloading. Say Y if you want to try offloading with
+	  MultiThreaded CPU copy APIs.
+
diff --git a/drivers/migoffcopy/Makefile b/drivers/migoffcopy/Makefile
new file mode 100644
index 000000000000..0a3c356d67e6
--- /dev/null
+++ b/drivers/migoffcopy/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_MTCOPY_CPU) += mtcopy/
diff --git a/drivers/migoffcopy/mtcopy/Makefile b/drivers/migoffcopy/mtcopy/Makefile
new file mode 100644
index 000000000000..b4d7da85eda9
--- /dev/null
+++ b/drivers/migoffcopy/mtcopy/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_MTCOPY_CPU) += copy_pages.o
diff --git a/drivers/migoffcopy/mtcopy/copy_pages.c b/drivers/migoffcopy/mtcopy/copy_pages.c
new file mode 100644
index 000000000000..4c9c7d90c9fd
--- /dev/null
+++ b/drivers/migoffcopy/mtcopy/copy_pages.c
@@ -0,0 +1,337 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Parallel page copy routine.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#define MAX_NUM_COPY_THREADS 64
+
+unsigned int limit_mt_num = 4;
+static int is_dispatching;
+
+static int copy_page_lists_mt(struct list_head *dst_folios,
+		struct list_head *src_folios, int nr_items);
+static bool can_migrate_mt(struct folio *dst, struct folio *src);
+
+static DEFINE_MUTEX(migratecfg_mutex);
+
+/* CPU Multithreaded Batch Migrator */
+struct migrator cpu_migrator = {
+	.name = "CPU_MT_COPY\0",
+	.migrate_offc = copy_page_lists_mt,
+	.can_migrate_offc = can_migrate_mt,
+	.owner = THIS_MODULE,
+};
+
+struct copy_item {
+	char *to;
+	char *from;
+	unsigned long chunk_size;
+};
+
+struct copy_page_info {
+	struct work_struct copy_page_work;
+	int ret;
+	unsigned long num_items;
+	struct copy_item item_list[];
+};
+
+static unsigned long copy_page_routine(char *vto, char *vfrom,
+		unsigned long chunk_size)
+{
+	return copy_mc_to_kernel(vto, vfrom, chunk_size);
+}
+
+static void copy_page_work_queue_thread(struct work_struct *work)
+{
+	struct copy_page_info *my_work = (struct copy_page_info *)work;
+	int i;
+
+	my_work->ret = 0;
+	for (i = 0; i < my_work->num_items; ++i)
+		my_work->ret |= !!copy_page_routine(my_work->item_list[i].to,
+						my_work->item_list[i].from,
+						my_work->item_list[i].chunk_size);
+}
+
+static ssize_t mt_offloading_set(struct kobject *kobj, struct kobj_attribute *attr,
+		const char *buf, size_t count)
+{
+	int ccode;
+	int action;
+
+	ccode = kstrtoint(buf, 0, &action);
+	if (ccode) {
+		pr_debug("(%s:) error parsing input %s\n", __func__, buf);
+		return ccode;
+	}
+
+	/*
+	 * action is 0: User wants to disable MT offloading.
+	 * action is 1: User wants to enable MT offloading.
+	 */
+	switch (action) {
+	case 0:
+		mutex_lock(&migratecfg_mutex);
+		if (is_dispatching == 1) {
+			stop_offloading();
+			is_dispatching = 0;
+		} else
+			pr_debug("MT migration offloading is already OFF\n");
+		mutex_unlock(&migratecfg_mutex);
+		break;
+	case 1:
+		mutex_lock(&migratecfg_mutex);
+		if (is_dispatching == 0) {
+			start_offloading(&cpu_migrator);
+			is_dispatching = 1;
+		} else
+			pr_debug("MT migration offloading is already ON\n");
+		mutex_unlock(&migratecfg_mutex);
+		break;
+	default:
+		pr_debug("input should be zero or one, parsed as %d\n", action);
+	}
+	return sizeof(action);
+}
+
+static ssize_t mt_offloading_show(struct kobject *kobj,
+		struct kobj_attribute *attr, char *buf)
+{
+	return sysfs_emit(buf, "%d\n", is_dispatching);
+}
+
+static ssize_t mt_threads_set(struct kobject *kobj, struct kobj_attribute *attr,
+		const char *buf, size_t count)
+{
+	int ccode;
+	unsigned int threads;
+
+	ccode = kstrtouint(buf, 0, &threads);
+	if (ccode) {
+		pr_debug("(%s:) error parsing input %s\n", __func__, buf);
+		return ccode;
+	}
+
+	if (threads > 0 && threads <= MAX_NUM_COPY_THREADS) {
+		mutex_lock(&migratecfg_mutex);
+		limit_mt_num = threads;
+		mutex_unlock(&migratecfg_mutex);
+		pr_debug("MT threads set to %u\n", limit_mt_num);
+	} else {
+		pr_debug("Invalid thread count. Must be between 1 and %d\n", MAX_NUM_COPY_THREADS);
+		return -EINVAL;
+	}
+
+	return count;
+}
+
+static ssize_t mt_threads_show(struct kobject *kobj,
+		struct kobj_attribute *attr, char *buf)
+{
+	return sysfs_emit(buf, "%u\n", limit_mt_num);
+}
+
+static bool can_migrate_mt(struct folio *dst, struct folio *src)
+{
+	return true;
+}
+
+int copy_page_lists_mt(struct list_head *dst_folios,
+		struct list_head *src_folios, int nr_items)
+{
+	struct copy_page_info *work_items[MAX_NUM_COPY_THREADS] = {0};
+	unsigned int total_mt_num = limit_mt_num;
+	struct folio *src, *src2, *dst, *dst2;
+	int max_items_per_thread;
+	int item_idx;
+	int err = 0;
+	int cpu;
+	int i;
+
+	if (IS_ENABLED(CONFIG_HIGHMEM))
+		return -ENOTSUPP;
+
+	if (total_mt_num > MAX_NUM_COPY_THREADS)
+		total_mt_num = MAX_NUM_COPY_THREADS;
+
+	/* Each thread gets part of each page, if nr_items < total_mt_num */
+	if (nr_items < total_mt_num)
+		max_items_per_thread = nr_items;
+	else
+		max_items_per_thread = (nr_items / total_mt_num) +
+				((nr_items % total_mt_num) ?
+				1 : 0);
+
+
+	for (cpu = 0; cpu < total_mt_num; ++cpu) {
+		work_items[cpu] = kzalloc(sizeof(struct copy_page_info) +
+					  sizeof(struct copy_item) *
+					  max_items_per_thread,
+					  GFP_NOWAIT);
+		if (!work_items[cpu]) {
+			err = -ENOMEM;
+			goto free_work_items;
+		}
+	}
+
+	if (nr_items < total_mt_num) {
+		for (cpu = 0; cpu < total_mt_num; ++cpu) {
+			INIT_WORK((struct work_struct *)work_items[cpu],
+				  copy_page_work_queue_thread);
+			work_items[cpu]->num_items = max_items_per_thread;
+		}
+
+		item_idx = 0;
+		dst = list_first_entry(dst_folios, struct folio, lru);
+		dst2 = list_next_entry(dst, lru);
+		list_for_each_entry_safe(src, src2, src_folios, lru) {
+			unsigned long chunk_size = PAGE_SIZE * folio_nr_pages(src) / total_mt_num;
+			char *vfrom = page_address(&src->page);
+			char *vto = page_address(&dst->page);
+
+			VM_WARN_ON(PAGE_SIZE * folio_nr_pages(src) % total_mt_num);
+			VM_WARN_ON(folio_nr_pages(dst) != folio_nr_pages(src));
+
+			for (cpu = 0; cpu < total_mt_num; ++cpu) {
+				work_items[cpu]->item_list[item_idx].to =
+					vto + chunk_size * cpu;
+				work_items[cpu]->item_list[item_idx].from =
+					vfrom + chunk_size * cpu;
+				work_items[cpu]->item_list[item_idx].chunk_size =
+					chunk_size;
+			}
+
+			item_idx++;
+			dst = dst2;
+			dst2 = list_next_entry(dst, lru);
+		}
+
+		for (cpu = 0; cpu < total_mt_num; ++cpu)
+			queue_work(system_unbound_wq,
+				   (struct work_struct *)work_items[cpu]);
+	} else {
+		int num_xfer_per_thread = nr_items / total_mt_num;
+		int per_cpu_item_idx;
+
+
+		for (cpu = 0; cpu < total_mt_num; ++cpu) {
+			INIT_WORK((struct work_struct *)work_items[cpu],
+				  copy_page_work_queue_thread);
+
+			work_items[cpu]->num_items = num_xfer_per_thread +
+					(cpu < (nr_items % total_mt_num));
+		}
+
+		cpu = 0;
+		per_cpu_item_idx = 0;
+		item_idx = 0;
+		dst = list_first_entry(dst_folios, struct folio, lru);
+		dst2 = list_next_entry(dst, lru);
+		list_for_each_entry_safe(src, src2, src_folios, lru) {
+			work_items[cpu]->item_list[per_cpu_item_idx].to =
+				page_address(&dst->page);
+			work_items[cpu]->item_list[per_cpu_item_idx].from =
+				page_address(&src->page);
+			work_items[cpu]->item_list[per_cpu_item_idx].chunk_size =
+				PAGE_SIZE * folio_nr_pages(src);
+
+			VM_WARN_ON(folio_nr_pages(dst) !=
+				   folio_nr_pages(src));
+
+			per_cpu_item_idx++;
+			item_idx++;
+			dst = dst2;
+			dst2 = list_next_entry(dst, lru);
+
+			if (per_cpu_item_idx == work_items[cpu]->num_items) {
+				queue_work(system_unbound_wq,
+					   (struct work_struct *)work_items[cpu]);
+				per_cpu_item_idx = 0;
+				cpu++;
+			}
+		}
+		if (item_idx != nr_items)
+			pr_warn("%s: only %d out of %d pages are transferred\n",
+				__func__, item_idx - 1, nr_items);
+	}
+
+	/* Wait until it finishes */
+	for (i = 0; i < total_mt_num; ++i) {
+		flush_work((struct work_struct *)work_items[i]);
+		/* retry if any copy fails */
+		if (work_items[i]->ret)
+			err = -EAGAIN;
+	}
+
+free_work_items:
+	for (cpu = 0; cpu < total_mt_num; ++cpu)
+		kfree(work_items[cpu]);
+
+	return err;
+}
+
+static struct kobject *mt_kobj_ref;
+static struct kobj_attribute mt_offloading_attribute = __ATTR(offloading, 0664,
+		mt_offloading_show, mt_offloading_set);
+static struct kobj_attribute mt_threads_attribute = __ATTR(threads, 0664,
+		mt_threads_show, mt_threads_set);
+
+static int __init cpu_mt_module_init(void)
+{
+	int ret = 0;
+
+	mt_kobj_ref = kobject_create_and_add("cpu_mt", kernel_kobj);
+	if (!mt_kobj_ref)
+		return -ENOMEM;
+
+	ret = sysfs_create_file(mt_kobj_ref, &mt_offloading_attribute.attr);
+	if (ret)
+		goto out_offloading;
+
+	ret = sysfs_create_file(mt_kobj_ref, &mt_threads_attribute.attr);
+	if (ret)
+		goto out_threads;
+
+	is_dispatching = 0;
+
+	return 0;
+
+out_threads:
+	sysfs_remove_file(mt_kobj_ref, &mt_offloading_attribute.attr);
+out_offloading:
+	kobject_put(mt_kobj_ref);
+	return ret;
+}
+
+static void __exit cpu_mt_module_exit(void)
+{
+	/* Stop the MT offloading to unload the module */
+	mutex_lock(&migratecfg_mutex);
+	if (is_dispatching == 1) {
+		stop_offloading();
+		is_dispatching = 0;
+	}
+	mutex_unlock(&migratecfg_mutex);
+
+	sysfs_remove_file(mt_kobj_ref, &mt_threads_attribute.attr);
+	sysfs_remove_file(mt_kobj_ref, &mt_offloading_attribute.attr);
+	kobject_put(mt_kobj_ref);
+}
+
+module_init(cpu_mt_module_init);
+module_exit(cpu_mt_module_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Zi Yan");
+MODULE_DESCRIPTION("CPU_MT_COPY"); /* CPU Multithreaded Batch Migrator */
diff --git a/mm/migrate.c b/mm/migrate.c
index 862a3d1eff60..e74dbc7a4758 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1831,18 +1831,13 @@ static void migrate_folios_batch_move(struct list_head *src_folios,
 		int *nr_retry_pages)
 {
 	struct folio *folio, *folio2, *dst, *dst2;
-	int rc, nr_pages = 0, nr_batched_folios = 0;
+	int rc, nr_pages = 0, total_nr_pages = 0, nr_batched_folios = 0;
 	int old_page_state = 0;
 	struct anon_vma *anon_vma = NULL;
 	int is_thp = 0;
 	LIST_HEAD(err_src);
 	LIST_HEAD(err_dst);
 
-	if (mode != MIGRATE_ASYNC) {
-		*retry += 1;
-		return;
-	}
-
 	/*
 	 * Iterate over the list of locked src/dst folios to copy the metadata
 	 */
@@ -1892,8 +1887,10 @@ static void migrate_folios_batch_move(struct list_head *src_folios,
 				old_page_state & PAGE_WAS_MAPPED,
 				anon_vma, true, ret_folios);
 		migrate_folio_undo_dst(dst, true, put_new_folio, private);
-	} else /* MIGRATEPAGE_SUCCESS */
+	} else { /* MIGRATEPAGE_SUCCESS */
+		total_nr_pages += nr_pages;
 		nr_batched_folios++;
+	}
 
 	dst = dst2;
 	dst2 = list_next_entry(dst, lru);
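
For readers following copy_page_lists_mt() above, a minimal user-space sketch
of its two work-splitting strategies (illustrative only; plan_split() and the
printed plan are hypothetical and not part of the patch): with fewer folios
than threads, every thread copies a 1/nr_threads chunk of each folio;
otherwise whole folios are handed out, with the remainder going to the first
threads:

	#include <stdio.h>

	#define MAX_NUM_COPY_THREADS 64

	/* Mirrors the split arithmetic in copy_page_lists_mt(). */
	static void plan_split(int nr_items, unsigned int nr_threads)
	{
		unsigned int cpu;

		if (nr_threads > MAX_NUM_COPY_THREADS)
			nr_threads = MAX_NUM_COPY_THREADS;

		if (nr_items < nr_threads) {
			/* Few folios: each thread copies one chunk of every folio. */
			printf("%d folios, %u threads: each thread copies %d chunks, one per folio\n",
			       nr_items, nr_threads, nr_items);
		} else {
			/* Many folios: whole folios per thread, remainder to the first threads. */
			int base = nr_items / nr_threads;
			int rem = nr_items % nr_threads;

			for (cpu = 0; cpu < nr_threads; cpu++)
				printf("thread %u copies %d folios\n", cpu,
				       base + (cpu < (unsigned int)rem));
		}
	}

	int main(void)
	{
		plan_split(3, 4);	/* fewer folios than threads */
		plan_split(10, 4);	/* more folios than threads */
		return 0;
	}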