From patchwork Wed Mar 19 19:22:10 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Shivank Garg <shivankg@amd.com>
X-Patchwork-Id: 14023041
From: Shivank Garg <shivankg@amd.com>
Subject: [PATCH RFC V2 7/9] dcbm: add dma core batch migrator for batch page
 offloading
Date: Wed, 19 Mar 2025 19:22:10 +0000
Message-ID: <20250319192211.10092-8-shivankg@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20250319192211.10092-1-shivankg@amd.com>
References: <20250319192211.10092-1-shivankg@amd.com>

The dcbm (DMA core batch migrator) provides a generic interface using
DMAEngine for end-to-end testing of the batch page migration offload
feature.

Enable DCBM offload:
	echo 1 > /sys/kernel/dcbm/offloading
	echo NR_DMA_CHAN_TO_USE > /sys/kernel/dcbm/nr_dma_chan

Disable DCBM offload:
	echo 0 > /sys/kernel/dcbm/offloading

Signed-off-by: Shivank Garg <shivankg@amd.com>
---
 drivers/migoffcopy/Kconfig       |   8 +
 drivers/migoffcopy/Makefile      |   1 +
 drivers/migoffcopy/dcbm/Makefile |   1 +
 drivers/migoffcopy/dcbm/dcbm.c   | 393 +++++++++++++++++++++++++++++++
 4 files changed, 403 insertions(+)
 create mode 100644 drivers/migoffcopy/dcbm/Makefile
 create mode 100644 drivers/migoffcopy/dcbm/dcbm.c

diff --git a/drivers/migoffcopy/Kconfig b/drivers/migoffcopy/Kconfig
index e73698af3e72..c1b2eff7650d 100644
--- a/drivers/migoffcopy/Kconfig
+++ b/drivers/migoffcopy/Kconfig
@@ -6,4 +6,12 @@ config MTCOPY_CPU
 	  Interface MT COPY CPU driver for batch page migration
 	  offloading. Say Y if you want to try offloading with
 	  MultiThreaded CPU copy APIs.
 
+config DCBM_DMA
+	bool "DMA Core Batch Migrator"
+	depends on OFFC_MIGRATION && DMA_ENGINE
+	default n
+	help
+	  Interface DMA driver for batch page migration offloading.
+	  Say Y if you want to try offloading with the DMAEngine-API
+	  based driver.
diff --git a/drivers/migoffcopy/Makefile b/drivers/migoffcopy/Makefile
index 0a3c356d67e6..dedc86ff54c1 100644
--- a/drivers/migoffcopy/Makefile
+++ b/drivers/migoffcopy/Makefile
@@ -1 +1,2 @@
 obj-$(CONFIG_MTCOPY_CPU)	+= mtcopy/
+obj-$(CONFIG_DCBM_DMA)		+= dcbm/
diff --git a/drivers/migoffcopy/dcbm/Makefile b/drivers/migoffcopy/dcbm/Makefile
new file mode 100644
index 000000000000..56ba47cce0f1
--- /dev/null
+++ b/drivers/migoffcopy/dcbm/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_DCBM_DMA) += dcbm.o
diff --git a/drivers/migoffcopy/dcbm/dcbm.c b/drivers/migoffcopy/dcbm/dcbm.c
new file mode 100644
index 000000000000..185d8d2502fd
--- /dev/null
+++ b/drivers/migoffcopy/dcbm/dcbm.c
@@ -0,0 +1,393 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * DMA batch-offloading interface driver
+ *
+ * Copyright (C) 2024 Advanced Micro Devices, Inc.
+ */
+
+/*
+ * This code exemplifies how to leverage the mm layer's migration offload
+ * support for batch page offloading using DMA Engine APIs.
+ * Developers can use this template to write interfaces for custom hardware
+ * accelerators with specialized capabilities for batch page migration.
+ * This interface driver works end-to-end and can be used to test the
+ * patch series without special hardware, provided DMAEngine support is
+ * available.
+ */
+
+#include <linux/completion.h>
+#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
+#include <linux/kernel.h>
+#include <linux/kobject.h>
+#include <linux/list.h>
+#include <linux/migrate.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/sysfs.h>
+
+#define MAX_DMA_CHANNELS	16
+
+static int is_dispatching;
+static int nr_dma_chan;
+
+static int folios_copy_dma(struct list_head *dst_list,
+			   struct list_head *src_list, int folios_cnt);
+static int folios_copy_dma_parallel(struct list_head *dst_list,
+				    struct list_head *src_list,
+				    int folios_cnt, int thread_count);
+static bool can_migrate_dma(struct folio *dst, struct folio *src);
+
+static DEFINE_MUTEX(migratecfg_mutex);
+
+/* DMA Core Batch Migrator */
+struct migrator dmigrator = {
+	.name = "DCBM",
+	.migrate_offc = folios_copy_dma,
+	.can_migrate_offc = can_migrate_dma,
+	.owner = THIS_MODULE,
+};
+
+static ssize_t offloading_set(struct kobject *kobj, struct kobj_attribute *attr,
+			      const char *buf, size_t count)
+{
+	int ccode;
+	int action;
+
+	ccode = kstrtoint(buf, 0, &action);
+	if (ccode) {
+		pr_debug("(%s:) error parsing input %s\n", __func__, buf);
+		return ccode;
+	}
+
+	/*
+	 * action is 0: user wants to disable DMA offloading.
+	 * action is 1: user wants to enable DMA offloading.
+	 */
+	switch (action) {
+	case 0:
+		mutex_lock(&migratecfg_mutex);
+		if (is_dispatching == 1) {
+			stop_offloading();
+			is_dispatching = 0;
+		} else {
+			pr_debug("migration offloading is already OFF\n");
+		}
+		mutex_unlock(&migratecfg_mutex);
+		break;
+	case 1:
+		mutex_lock(&migratecfg_mutex);
+		if (is_dispatching == 0) {
+			start_offloading(&dmigrator);
+			is_dispatching = 1;
+		} else {
+			pr_debug("migration offloading is already ON\n");
+		}
+		mutex_unlock(&migratecfg_mutex);
+		break;
+	default:
+		pr_debug("input should be zero or one, parsed as %d\n", action);
+		return -EINVAL;
+	}
+	return count;
+}
+
+static ssize_t offloading_show(struct kobject *kobj,
+			       struct kobj_attribute *attr, char *buf)
+{
+	return sysfs_emit(buf, "%d\n", is_dispatching);
+}
+
+static ssize_t nr_dma_chan_set(struct kobject *kobj, struct kobj_attribute *attr,
+			       const char *buf, size_t count)
+{
+	int ccode;
+	int action;
+
+	ccode = kstrtoint(buf, 0, &action);
+	if (ccode) {
+		pr_err("(%s:) error parsing input %s\n", __func__, buf);
+		return ccode;
+	}
+
+	if (action < 1) {
+		pr_err("%s: invalid value, at least 1 channel\n", __func__);
+		return -EINVAL;
+	}
+	if (action > MAX_DMA_CHANNELS)
+		action = MAX_DMA_CHANNELS;
+
+	mutex_lock(&migratecfg_mutex);
+	nr_dma_chan = action;
+	mutex_unlock(&migratecfg_mutex);
+
+	return count;
+}
+
+static ssize_t nr_dma_chan_show(struct kobject *kobj,
+				struct kobj_attribute *attr, char *buf)
+{
+	return sysfs_emit(buf, "%d\n", nr_dma_chan);
+}
+
+static bool can_migrate_dma(struct folio *dst, struct folio *src)
+{
+	if (folio_test_hugetlb(src) || folio_test_hugetlb(dst) ||
+	    folio_has_private(src) || folio_has_private(dst) ||
+	    (folio_nr_pages(src) != folio_nr_pages(dst))) {
+		pr_err("can NOT DMA-migrate folio %p\n", src);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Per-channel work state: the DMA channel and a count of its
+ * in-flight transfers.
+ */
+struct dma_channel_work {
+	struct dma_chan *chan;
+	struct completion done;
+	int active_transfers;
+	spinlock_t lock;
+};
+
+/*
+ * Completion callback for one transfer; the transfer that drops the
+ * count to zero completes @done.
+ */
+static void folios_dma_completion_callback(void *param)
+{
+	struct dma_channel_work *chan_work = param;
+
+	spin_lock(&chan_work->lock);
+	chan_work->active_transfers--;
+	if (chan_work->active_transfers == 0)
+		complete(&chan_work->done);
+	spin_unlock(&chan_work->lock);
+}
+
+/*
+ * Prepare and submit one folio copy on @chan_work's channel: map the
+ * source and destination, then queue a memcpy descriptor.
+ */
+static int process_folio_dma_transfer(struct dma_channel_work *chan_work,
+				      struct folio *src, struct folio *dst)
+{
+	struct dma_chan *chan = chan_work->chan;
+	struct dma_device *dev = chan->device;
+	struct device *dma_dev = dmaengine_get_dma_device(chan);
+	dma_cookie_t cookie;
+	struct dma_async_tx_descriptor *tx;
+	enum dma_ctrl_flags flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
+	dma_addr_t srcdma_handle, dstdma_handle;
+	size_t data_size = folio_size(src);
+
+	/* Map source and destination pages */
+	srcdma_handle = dma_map_page(dma_dev, &src->page, 0, data_size,
+				     DMA_TO_DEVICE);
+	if (dma_mapping_error(dma_dev, srcdma_handle)) {
+		pr_err("src mapping error\n");
+		return -ENOMEM;
+	}
+
+	dstdma_handle = dma_map_page(dma_dev, &dst->page, 0, data_size,
+				     DMA_FROM_DEVICE);
+	if (dma_mapping_error(dma_dev, dstdma_handle)) {
+		pr_err("dst mapping error\n");
+		dma_unmap_page(dma_dev, srcdma_handle, data_size, DMA_TO_DEVICE);
+		return -ENOMEM;
+	}
+
+	/* Prepare DMA descriptor */
+	tx = dev->device_prep_dma_memcpy(chan, dstdma_handle, srcdma_handle,
+					 data_size, flags);
+	if (unlikely(!tx)) {
+		pr_err("prep_dma_memcpy error\n");
error\n"); + dma_unmap_page(dma_dev, dstdma_handle, data_size, DMA_FROM_DEVICE); + dma_unmap_page(dma_dev, srcdma_handle, data_size, DMA_TO_DEVICE); + return -EBUSY; + } + + /* Set up completion callback */ + tx->callback = folios_dma_completion_callback; + tx->callback_param = chan_work; + + /* Submit DMA transaction */ + spin_lock(&chan_work->lock); + chan_work->active_transfers++; + spin_unlock(&chan_work->lock); + + cookie = tx->tx_submit(tx); + if (dma_submit_error(cookie)) { + pr_err("dma_submit_error\n"); + spin_lock(&chan_work->lock); + chan_work->active_transfers--; + spin_unlock(&chan_work->lock); + dma_unmap_page(dma_dev, dstdma_handle, data_size, DMA_FROM_DEVICE); + dma_unmap_page(dma_dev, srcdma_handle, data_size, DMA_TO_DEVICE); + return -EINVAL; + } + + return 0; +} + +/** + * Copy folios using DMA in parallel. + * Divide into chunks, submit to DMA channels. + * if error, falls back to CPU + * Note: return 0 for all cases as error is taken care. + * TODO: Add poison recovery support. + */ +int folios_copy_dma_parallel(struct list_head *dst_list, + struct list_head *src_list, + int folios_cnt_total, int thread_count) +{ + struct dma_channel_work *chan_works; + struct dma_chan **channels; + int i, actual_channels = 0; + struct folio *src, *dst; + dma_cap_mask_t mask; + int channel_idx = 0; + int failed = 0; + int ret; + + /* TODO: optimise actual number of channels needed + at what point DMA set-up overheads < mig cost for N folio*/ + thread_count = min(thread_count, folios_cnt_total); + + /* Allocate memory for channels */ + channels = kmalloc_array(thread_count, sizeof(struct dma_chan *), GFP_KERNEL); + if (unlikely(!channels)) { + pr_err("failed to allocate memory for channels\n"); + folios_copy(dst_list, src_list, folios_cnt_total); + return 0; + } + + /* Request DMA channels */ + dma_cap_zero(mask); + dma_cap_set(DMA_MEMCPY, mask); + for (i = 0; i < thread_count; i++) { + channels[i] = dma_request_channel(mask, NULL, NULL); + if (!channels[i]) { + pr_err("could only allocate %d DMA channels\n", i); + break; + } + actual_channels++; + } + + if (unlikely(actual_channels == 0)) { + pr_err("couldn't allocate any DMA channels, falling back to CPU copy\n"); + kfree(channels); + folios_copy(dst_list, src_list, folios_cnt_total); + return 0; + } + + /* Allocate work structures */ + chan_works = kmalloc_array(actual_channels, sizeof(*chan_works), GFP_KERNEL); + if (unlikely(!chan_works)) { + pr_err("failed to allocate memory for work structures\n"); + for (i = 0; i < actual_channels; i++) + dma_release_channel(channels[i]); + kfree(channels); + folios_copy(dst_list, src_list, folios_cnt_total); + return 0; + } + + /* Initialize work structures */ + for (i = 0; i < actual_channels; i++) { + chan_works[i].chan = channels[i]; + init_completion(&chan_works[i].done); + chan_works[i].active_transfers = 0; + spin_lock_init(&chan_works[i].lock); + } + + /* STEP 1: Submit all DMA transfers across all channels */ + dst = list_first_entry(dst_list, struct folio, lru); + list_for_each_entry(src, src_list, lru) { + ret = process_folio_dma_transfer(&chan_works[channel_idx], src, dst); + if (unlikely(ret)) { + /* Fallback to CPU */ + folio_copy(dst, src); + failed++; + } + + channel_idx = (channel_idx + 1) % actual_channels; + + dst = list_next_entry(dst, lru); + } + + /* STEP 2: Issue all pending DMA requests */ + for (i = 0; i < actual_channels; i++) { + dma_async_issue_pending(chan_works[i].chan); + } + + /* STEP 3: Wait for all DMA operations to complete */ + for (i = 0; i < actual_channels; 
+		wait_for_completion(&chan_works[i].done);
+	}
+
+	if (failed)
+		pr_err("%d folios fell back to CPU copy\n", failed);
+
+	/* Release all resources */
+	for (i = 0; i < actual_channels; i++)
+		dma_release_channel(channels[i]);
+
+	kfree(chan_works);
+	kfree(channels);
+
+	return 0;
+}
+
+/*
+ * Like folios_copy(), but copies through DMA channels.
+ */
+static int folios_copy_dma(struct list_head *dst_list,
+			   struct list_head *src_list,
+			   int folios_cnt)
+{
+	return folios_copy_dma_parallel(dst_list, src_list, folios_cnt,
+					nr_dma_chan);
+}
+
+static struct kobject *kobj_ref;
+static struct kobj_attribute offloading_attribute = __ATTR(offloading, 0664,
+		offloading_show, offloading_set);
+static struct kobj_attribute nr_dma_chan_attribute = __ATTR(nr_dma_chan, 0664,
+		nr_dma_chan_show, nr_dma_chan_set);
+
+static int __init dma_module_init(void)
+{
+	int ret = 0;
+
+	kobj_ref = kobject_create_and_add("dcbm", kernel_kobj);
+	if (!kobj_ref)
+		return -ENOMEM;
+
+	ret = sysfs_create_file(kobj_ref, &offloading_attribute.attr);
+	if (ret)
+		goto out;
+
+	ret = sysfs_create_file(kobj_ref, &nr_dma_chan_attribute.attr);
+	if (ret)
+		goto out;
+
+	is_dispatching = 0;
+	nr_dma_chan = 1;
+
+	return 0;
+out:
+	kobject_put(kobj_ref);
+	return ret;
+}
+
+static void __exit dma_module_exit(void)
+{
+	/* DMA offloading must be stopped before the module can be unloaded */
+	sysfs_remove_file(kobj_ref, &offloading_attribute.attr);
+	sysfs_remove_file(kobj_ref, &nr_dma_chan_attribute.attr);
+	kobject_put(kobj_ref);
+}
+
+module_init(dma_module_init);
+module_exit(dma_module_exit);
+
+/* DMA Core Batch Migrator */
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Shivank Garg");
+MODULE_DESCRIPTION("DCBM");
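
---
For anyone who wants to exercise the driver end to end without special
hardware, a small move_pages(2) test is one option. The sketch below is
illustrative and not part of the patch: it assumes a two-node NUMA
machine, root privileges, CONFIG_DCBM_DMA=y, and that this series routes
the batched folio copies of a move_pages() migration through the
registered migrator. The file name dcbm_test.c, the page count, and the
channel count are arbitrary.

// SPDX-License-Identifier: GPL-2.0-only
/* dcbm_test.c: illustrative user-space test, not part of this patch. */
#include <numaif.h>	/* move_pages(2), MPOL_MF_MOVE; link with -lnuma */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NR_PAGES 512	/* arbitrary batch size */

int main(void)
{
	long psz = sysconf(_SC_PAGESIZE);
	void *pages[NR_PAGES];
	int nodes[NR_PAGES], status[NR_PAGES], i;
	void *buf;

	if (posix_memalign(&buf, psz, NR_PAGES * psz))
		return 1;
	memset(buf, 0xaa, NR_PAGES * psz);	/* fault the pages in */

	/* Enable the offload; 4 channels is an arbitrary choice. */
	system("echo 4 > /sys/kernel/dcbm/nr_dma_chan");
	system("echo 1 > /sys/kernel/dcbm/offloading");

	for (i = 0; i < NR_PAGES; i++) {
		pages[i] = (char *)buf + i * psz;
		nodes[i] = 1;	/* migrate the whole batch to node 1 */
	}

	/* Batched migration; the mm layer may hand the copies to DCBM. */
	if (move_pages(0, NR_PAGES, pages, nodes, status, MPOL_MF_MOVE) < 0)
		perror("move_pages");
	else
		printf("page 0 is now on node %d\n", status[0]);

	system("echo 0 > /sys/kernel/dcbm/offloading");
	return 0;
}

Build with "gcc -o dcbm_test dcbm_test.c -lnuma" and run as root;
status[] reports the node each page landed on, and the driver's CPU
fallback keeps the run correct even when no DMA_MEMCPY channels are
available.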