From patchwork Wed Oct 27 09:56:56 2021
X-Patchwork-Submitter: Yishai Hadas
X-Patchwork-Id: 12586941
From: Yishai Hadas
Subject: [PATCH V5 mlx5-next 11/13] vfio/mlx5: Implement vfio_pci driver for mlx5 devices
Date: Wed, 27 Oct 2021 12:56:56 +0300
Message-ID: <20211027095658.144468-12-yishaih@nvidia.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20211027095658.144468-1-yishaih@nvidia.com>
References: <20211027095658.144468-1-yishaih@nvidia.com>
MIME-Version: 1.0
X-Mailing-List: kvm@vger.kernel.org

This patch adds support for a vfio_pci driver for mlx5 devices. It uses
vfio_pci_core to register with the VFIO subsystem and then implements the
mlx5-specific logic in the migration area.

The migration implementation follows the definition from uapi/vfio.h and
uses the mlx5 VF->PF command channel to achieve it.

This patch implements the suspend/resume flows.

Signed-off-by: Yishai Hadas
Signed-off-by: Leon Romanovsky
---
 MAINTAINERS                    |   6 +
 drivers/vfio/pci/Kconfig       |   3 +
 drivers/vfio/pci/Makefile      |   2 +
 drivers/vfio/pci/mlx5/Kconfig  |  10 +
 drivers/vfio/pci/mlx5/Makefile |   4 +
 drivers/vfio/pci/mlx5/main.c   | 698 +++++++++++++++++++++++++++++++++
 6 files changed, 723 insertions(+)
 create mode 100644 drivers/vfio/pci/mlx5/Kconfig
 create mode 100644 drivers/vfio/pci/mlx5/Makefile
 create mode 100644 drivers/vfio/pci/mlx5/main.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 8d118d7957d2..3689d452b0ed 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19703,6 +19703,12 @@ L: kvm@vger.kernel.org
 S: Maintained
 F: drivers/vfio/platform/
 
+VFIO MLX5 PCI DRIVER
+M: Yishai Hadas
+L: kvm@vger.kernel.org
+S: Maintained
+F: drivers/vfio/pci/mlx5/
+
 VGA_SWITCHEROO
 R: Lukas Wunner
 S: Maintained
diff --git a/drivers/vfio/pci/Kconfig b/drivers/vfio/pci/Kconfig
index 860424ccda1b..187b9c259944 100644
--- a/drivers/vfio/pci/Kconfig
+++ b/drivers/vfio/pci/Kconfig
@@ -43,4 +43,7 @@ config VFIO_PCI_IGD
 	  To enable Intel IGD assignment through vfio-pci, say Y.
 
 endif
+
+source "drivers/vfio/pci/mlx5/Kconfig"
+
 endif
diff --git a/drivers/vfio/pci/Makefile b/drivers/vfio/pci/Makefile
index 349d68d242b4..ed9d6f2e0555 100644
--- a/drivers/vfio/pci/Makefile
+++ b/drivers/vfio/pci/Makefile
@@ -7,3 +7,5 @@ obj-$(CONFIG_VFIO_PCI_CORE) += vfio-pci-core.o
 vfio-pci-y := vfio_pci.o
 vfio-pci-$(CONFIG_VFIO_PCI_IGD) += vfio_pci_igd.o
 obj-$(CONFIG_VFIO_PCI) += vfio-pci.o
+
+obj-$(CONFIG_MLX5_VFIO_PCI) += mlx5/
diff --git a/drivers/vfio/pci/mlx5/Kconfig b/drivers/vfio/pci/mlx5/Kconfig
new file mode 100644
index 000000000000..7088edc4fb28
--- /dev/null
+++ b/drivers/vfio/pci/mlx5/Kconfig
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: GPL-2.0-only
+config MLX5_VFIO_PCI
+	tristate "VFIO support for MLX5 PCI devices"
+	depends on MLX5_CORE
+	select VFIO_PCI_CORE
+	help
+	  This provides migration support for MLX5 devices using the VFIO
+	  framework.
+
+	  If you don't know what to do here, say N.
diff --git a/drivers/vfio/pci/mlx5/Makefile b/drivers/vfio/pci/mlx5/Makefile
new file mode 100644
index 000000000000..689627da7ff5
--- /dev/null
+++ b/drivers/vfio/pci/mlx5/Makefile
@@ -0,0 +1,4 @@
+# SPDX-License-Identifier: GPL-2.0-only
+obj-$(CONFIG_MLX5_VFIO_PCI) += mlx5-vfio-pci.o
+mlx5-vfio-pci-y := main.o cmd.o
+
diff --git a/drivers/vfio/pci/mlx5/main.c b/drivers/vfio/pci/mlx5/main.c
new file mode 100644
index 000000000000..467dee08ad77
--- /dev/null
+++ b/drivers/vfio/pci/mlx5/main.c
@@ -0,0 +1,698 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES.
+ * All rights reserved
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "cmd.h"
+
+enum {
+	MLX5VF_PCI_FREEZED = 1 << 0,
+};
+
+enum {
+	MLX5VF_REGION_PENDING_BYTES = 1 << 0,
+	MLX5VF_REGION_DATA_SIZE = 1 << 1,
+};
+
+enum {
+	MLX5VF_SUPPORTED_DEVICE_STATES = VFIO_DEVICE_STATE_RUNNING |
+					 VFIO_DEVICE_STATE_SAVING |
+					 VFIO_DEVICE_STATE_RESUMING,
+};
+
+#define MLX5VF_MIG_REGION_DATA_SIZE SZ_128K
+/* Data section offset from migration region */
+#define MLX5VF_MIG_REGION_DATA_OFFSET \
+	(sizeof(struct vfio_device_migration_info))
+
+#define VFIO_DEVICE_MIGRATION_OFFSET(x) \
+	(offsetof(struct vfio_device_migration_info, x))
+
+struct mlx5vf_pci_migration_info {
+	u32 vfio_dev_state; /* VFIO_DEVICE_STATE_XXX */
+	u32 dev_state; /* device migration state */
+	u32 region_state; /* Use MLX5VF_REGION_XXX */
+	u16 vhca_id;
+	struct mlx5_vhca_state_data vhca_state_data;
+};
+
+struct mlx5vf_pci_core_device {
+	struct vfio_pci_core_device core_device;
+	u8 migrate_cap:1;
+	/* protect migration state */
+	struct mutex state_mutex;
+	struct mlx5vf_pci_migration_info vmig;
+};
+
+static int mlx5vf_pci_unquiesce_device(struct mlx5vf_pci_core_device *mvdev)
+{
+	return mlx5vf_cmd_resume_vhca(mvdev->core_device.pdev,
+				      mvdev->vmig.vhca_id,
+				      MLX5_RESUME_VHCA_IN_OP_MOD_RESUME_MASTER);
+}
+
+static int mlx5vf_pci_quiesce_device(struct mlx5vf_pci_core_device *mvdev)
+{
+	return mlx5vf_cmd_suspend_vhca(
+		mvdev->core_device.pdev, mvdev->vmig.vhca_id,
+		MLX5_SUSPEND_VHCA_IN_OP_MOD_SUSPEND_MASTER);
+}
+
+static int mlx5vf_pci_unfreeze_device(struct mlx5vf_pci_core_device *mvdev)
+{
+	int ret;
+
+	ret = mlx5vf_cmd_resume_vhca(mvdev->core_device.pdev,
+				     mvdev->vmig.vhca_id,
+				     MLX5_RESUME_VHCA_IN_OP_MOD_RESUME_SLAVE);
+	if (ret)
+		return ret;
+
+	mvdev->vmig.dev_state &= ~MLX5VF_PCI_FREEZED;
+	return 0;
+}
+
+static int mlx5vf_pci_freeze_device(struct mlx5vf_pci_core_device *mvdev)
+{
+	int ret;
+
+	ret = mlx5vf_cmd_suspend_vhca(
+		mvdev->core_device.pdev, mvdev->vmig.vhca_id,
+		MLX5_SUSPEND_VHCA_IN_OP_MOD_SUSPEND_SLAVE);
+	if (ret)
+		return ret;
+
+	mvdev->vmig.dev_state |= MLX5VF_PCI_FREEZED;
+	return 0;
+}
+
+static int mlx5vf_pci_save_device_data(struct mlx5vf_pci_core_device *mvdev)
+{
+	u32 state_size = 0;
+	int ret;
+
+	if (!(mvdev->vmig.dev_state & MLX5VF_PCI_FREEZED))
+		return -EFAULT;
+
+	/* If we already read state no reason to re-read */
+	if (mvdev->vmig.vhca_state_data.state_size)
+		return 0;
+
+	ret = mlx5vf_cmd_query_vhca_migration_state(
+		mvdev->core_device.pdev, mvdev->vmig.vhca_id, &state_size);
+	if (ret)
+		return ret;
+
+	return mlx5vf_cmd_save_vhca_state(mvdev->core_device.pdev,
+					  mvdev->vmig.vhca_id, state_size,
+					  &mvdev->vmig.vhca_state_data);
+}
+
+static int mlx5vf_pci_new_write_window(struct mlx5vf_pci_core_device *mvdev)
+{
+	struct mlx5_vhca_state_data *state_data = &mvdev->vmig.vhca_state_data;
+	u32 num_pages_needed;
+	u64 allocated_ready;
+	u32 bytes_needed;
+
+	/* Check how many bytes are available from previous flows */
+	WARN_ON(state_data->num_pages * PAGE_SIZE <
+		state_data->win_start_offset);
+	allocated_ready = (state_data->num_pages * PAGE_SIZE) -
+			  state_data->win_start_offset;
+	WARN_ON(allocated_ready > MLX5VF_MIG_REGION_DATA_SIZE);
+
+	bytes_needed = MLX5VF_MIG_REGION_DATA_SIZE - allocated_ready;
+	if (!bytes_needed)
+		return 0;
+
+	num_pages_needed = DIV_ROUND_UP_ULL(bytes_needed, PAGE_SIZE);
+	return mlx5vf_add_migration_pages(state_data, num_pages_needed);
+}
+
+static ssize_t
+mlx5vf_pci_handle_migration_data_size(struct mlx5vf_pci_core_device *mvdev,
+				      char __user *buf, bool iswrite)
+{
+	struct mlx5vf_pci_migration_info *vmig = &mvdev->vmig;
+	u64 data_size;
+	int ret;
+
+	if (iswrite) {
+		/* data_size is writable only during resuming state */
+		if (vmig->vfio_dev_state != VFIO_DEVICE_STATE_RESUMING)
+			return -EINVAL;
+
+		ret = copy_from_user(&data_size, buf, sizeof(data_size));
+		if (ret)
+			return -EFAULT;
+
+		vmig->vhca_state_data.state_size += data_size;
+		vmig->vhca_state_data.win_start_offset += data_size;
+		ret = mlx5vf_pci_new_write_window(mvdev);
+		if (ret)
+			return ret;
+
+	} else {
+		if (vmig->vfio_dev_state != VFIO_DEVICE_STATE_SAVING)
+			return -EINVAL;
+
+		data_size = min_t(u64, MLX5VF_MIG_REGION_DATA_SIZE,
+				  vmig->vhca_state_data.state_size -
+				  vmig->vhca_state_data.win_start_offset);
+		ret = copy_to_user(buf, &data_size, sizeof(data_size));
+		if (ret)
+			return -EFAULT;
+	}
+
+	vmig->region_state |= MLX5VF_REGION_DATA_SIZE;
+	return sizeof(data_size);
+}
+
+static ssize_t
+mlx5vf_pci_handle_migration_data_offset(struct mlx5vf_pci_core_device *mvdev,
+					char __user *buf, bool iswrite)
+{
+	static const u64 data_offset = MLX5VF_MIG_REGION_DATA_OFFSET;
+	int ret;
+
+	/* RO field */
+	if (iswrite)
+		return -EFAULT;
+
+	ret = copy_to_user(buf, &data_offset, sizeof(data_offset));
+	if (ret)
+		return -EFAULT;
+
+	return sizeof(data_offset);
+}
+
+static ssize_t
+mlx5vf_pci_handle_migration_pending_bytes(struct mlx5vf_pci_core_device *mvdev,
+					  char __user *buf, bool iswrite)
+{
+	struct mlx5vf_pci_migration_info *vmig = &mvdev->vmig;
+	u64 pending_bytes;
+	int ret;
+
+	/* RO field */
+	if (iswrite)
+		return -EFAULT;
+
+	if (vmig->vfio_dev_state == (VFIO_DEVICE_STATE_SAVING |
+				     VFIO_DEVICE_STATE_RUNNING)) {
+		/*
+		 * In pre-copy state we have no data to return for now,
+		 * return 0 pending bytes
+		 */
+		pending_bytes = 0;
+	} else {
+		if (!vmig->vhca_state_data.state_size)
+			return 0;
+		pending_bytes = vmig->vhca_state_data.state_size -
+				vmig->vhca_state_data.win_start_offset;
+	}
+
+	ret = copy_to_user(buf, &pending_bytes, sizeof(pending_bytes));
+	if (ret)
+		return -EFAULT;
+
+	/* Window moves forward once data from previous iteration was read */
+	if (vmig->region_state & MLX5VF_REGION_DATA_SIZE)
+		vmig->vhca_state_data.win_start_offset +=
+			min_t(u64, MLX5VF_MIG_REGION_DATA_SIZE, pending_bytes);
+
+	WARN_ON(vmig->vhca_state_data.win_start_offset >
+		vmig->vhca_state_data.state_size);
+
+	/* New iteration started */
+	vmig->region_state = MLX5VF_REGION_PENDING_BYTES;
+	return sizeof(pending_bytes);
+}
+
+static int mlx5vf_load_state(struct mlx5vf_pci_core_device *mvdev)
+{
+	if (!mvdev->vmig.vhca_state_data.state_size)
+		return 0;
+
+	return mlx5vf_cmd_load_vhca_state(mvdev->core_device.pdev,
+					  mvdev->vmig.vhca_id,
+					  &mvdev->vmig.vhca_state_data);
+}
+
+static void mlx5vf_reset_mig_state(struct mlx5vf_pci_core_device *mvdev)
+{
+	struct mlx5vf_pci_migration_info *vmig = &mvdev->vmig;
+
+	vmig->region_state = 0;
+	mlx5vf_reset_vhca_state(&vmig->vhca_state_data);
+}
+
+static int mlx5vf_pci_set_device_state(struct mlx5vf_pci_core_device *mvdev,
+				       u32 state)
+{
+	struct mlx5vf_pci_migration_info *vmig = &mvdev->vmig;
+	u32 old_state = vmig->vfio_dev_state;
+	u32 flipped_bits = old_state ^ state;
+	int ret = 0;
+
+	if (old_state == VFIO_DEVICE_STATE_ERROR ||
+	    !VFIO_DEVICE_STATE_VALID(state) ||
+	    (state & ~MLX5VF_SUPPORTED_DEVICE_STATES))
+		return -EINVAL;
+
+	/* Running switches off */
+	if ((flipped_bits & VFIO_DEVICE_STATE_RUNNING) &&
+	    !(state & VFIO_DEVICE_STATE_RUNNING)) {
+		ret = mlx5vf_pci_quiesce_device(mvdev);
+		if (ret)
+			return ret;
+		ret = mlx5vf_pci_freeze_device(mvdev);
+		if (ret) {
+			if (mlx5vf_pci_unquiesce_device(mvdev))
+				vmig->vfio_dev_state = VFIO_DEVICE_STATE_ERROR;
+			return ret;
+		}
+	}
+
+	/* Saving switches on and not running */
+	if ((flipped_bits &
+	     (VFIO_DEVICE_STATE_RUNNING | VFIO_DEVICE_STATE_SAVING)) &&
+	    ((state & (VFIO_DEVICE_STATE_RUNNING |
+		       VFIO_DEVICE_STATE_SAVING)) == VFIO_DEVICE_STATE_SAVING)) {
+		ret = mlx5vf_pci_save_device_data(mvdev);
+		if (ret)
+			return ret;
+	}
+
+	/* Resuming switches on */
+	if ((flipped_bits & VFIO_DEVICE_STATE_RESUMING) &&
+	    (state & VFIO_DEVICE_STATE_RESUMING)) {
+		mlx5vf_reset_mig_state(mvdev);
+		ret = mlx5vf_pci_new_write_window(mvdev);
+		if (ret)
+			return ret;
+	}
+
+	/* Resuming switches off */
+	if ((flipped_bits & VFIO_DEVICE_STATE_RESUMING) &&
+	    !(state & VFIO_DEVICE_STATE_RESUMING)) {
+		/* deserialize state into the device */
+		ret = mlx5vf_load_state(mvdev);
+		if (ret) {
+			vmig->vfio_dev_state = VFIO_DEVICE_STATE_ERROR;
+			return ret;
+		}
+	}
+
+	/* Running switches on */
+	if ((flipped_bits & VFIO_DEVICE_STATE_RUNNING) &&
+	    (state & VFIO_DEVICE_STATE_RUNNING)) {
+		ret = mlx5vf_pci_unfreeze_device(mvdev);
+		if (ret)
+			return ret;
+		ret = mlx5vf_pci_unquiesce_device(mvdev);
+		if (ret) {
+			vmig->vfio_dev_state = VFIO_DEVICE_STATE_ERROR;
+			return ret;
+		}
+	}
+
+	vmig->vfio_dev_state = state;
+	return 0;
+}
+
+static ssize_t
+mlx5vf_pci_handle_migration_device_state(struct mlx5vf_pci_core_device *mvdev,
+					 char __user *buf, bool iswrite)
+{
+	size_t count = sizeof(mvdev->vmig.vfio_dev_state);
+	int ret;
+
+	if (iswrite) {
+		u32 device_state;
+
+		ret = copy_from_user(&device_state, buf, count);
+		if (ret)
+			return -EFAULT;
+
+		ret = mlx5vf_pci_set_device_state(mvdev, device_state);
+		if (ret)
+			return ret;
+	} else {
+		ret = copy_to_user(buf, &mvdev->vmig.vfio_dev_state, count);
+		if (ret)
+			return -EFAULT;
+	}
+
+	return count;
+}
+
+static ssize_t
+mlx5vf_pci_copy_user_data_to_device_state(struct mlx5vf_pci_core_device *mvdev,
+					  char __user *buf, size_t count,
+					  u64 offset)
+{
+	struct mlx5_vhca_state_data *state_data = &mvdev->vmig.vhca_state_data;
+	char __user *from_buff = buf;
+	u32 curr_offset;
+	u32 win_page_offset;
+	u32 copy_count;
+	struct page *page;
+	char *to_buff;
+	int ret;
+
+	curr_offset = state_data->win_start_offset + offset;
+
+	do {
+		page = mlx5vf_get_migration_page(&state_data->mig_data,
+						 curr_offset);
+		if (!page)
+			return -EINVAL;
+
+		win_page_offset = curr_offset % PAGE_SIZE;
+		copy_count = min_t(u32, PAGE_SIZE - win_page_offset, count);
+
+		to_buff = kmap_local_page(page);
+		ret = copy_from_user(to_buff + win_page_offset, from_buff,
+				     copy_count);
+		kunmap_local(to_buff);
+		if (ret)
+			return -EFAULT;
+
+		from_buff += copy_count;
+		curr_offset += copy_count;
+		count -= copy_count;
+	} while (count > 0);
+
+	return 0;
+}
+
+static ssize_t
+mlx5vf_pci_copy_device_state_to_user(struct mlx5vf_pci_core_device *mvdev,
+				     char __user *buf, u64 offset, size_t count)
+{
+	struct mlx5_vhca_state_data *state_data = &mvdev->vmig.vhca_state_data;
+	char __user *to_buff = buf;
+	u32 win_available_bytes;
+	u32 win_page_offset;
+	u32 copy_count;
+	u32 curr_offset;
+	char *from_buff;
+	struct page *page;
+	int ret;
+
+	win_available_bytes =
+		min_t(u64, MLX5VF_MIG_REGION_DATA_SIZE,
+		      mvdev->vmig.vhca_state_data.state_size -
+		      mvdev->vmig.vhca_state_data.win_start_offset);
+
+	if (count + offset > win_available_bytes)
+		return -EINVAL;
+
+	curr_offset = state_data->win_start_offset + offset;
+
+	do {
+		page = mlx5vf_get_migration_page(&state_data->mig_data,
+						 curr_offset);
+		if (!page)
+			return -EINVAL;
+
+		win_page_offset = curr_offset % PAGE_SIZE;
+		copy_count = min_t(u32, PAGE_SIZE - win_page_offset, count);
+
+		from_buff = kmap_local_page(page);
+		ret = copy_to_user(buf, from_buff + win_page_offset,
+				   copy_count);
+		kunmap_local(from_buff);
+		if (ret)
+			return -EFAULT;
+
+		curr_offset += copy_count;
+		count -= copy_count;
+		to_buff += copy_count;
+	} while (count);
+
+	return 0;
+}
+
+static ssize_t
+mlx5vf_pci_migration_data_rw(struct mlx5vf_pci_core_device *mvdev,
+			     char __user *buf, size_t count, u64 offset,
+			     bool iswrite)
+{
+	int ret;
+
+	if (offset + count > MLX5VF_MIG_REGION_DATA_SIZE)
+		return -EINVAL;
+
+	if (iswrite)
+		ret = mlx5vf_pci_copy_user_data_to_device_state(mvdev, buf,
+								count, offset);
+	else
+		ret = mlx5vf_pci_copy_device_state_to_user(mvdev, buf, offset,
+							   count);
+	if (ret)
+		return ret;
+	return count;
+}
+
+static ssize_t mlx5vf_pci_mig_rw(struct vfio_pci_core_device *vdev,
+				 char __user *buf, size_t count, loff_t *ppos,
+				 bool iswrite)
+{
+	struct mlx5vf_pci_core_device *mvdev =
+		container_of(vdev, struct mlx5vf_pci_core_device, core_device);
+	u64 pos = *ppos & VFIO_PCI_OFFSET_MASK;
+	int ret;
+
+	mutex_lock(&mvdev->state_mutex);
+	/* Copy to/from the migration region data section */
+	if (pos >= MLX5VF_MIG_REGION_DATA_OFFSET) {
+		ret = mlx5vf_pci_migration_data_rw(
+			mvdev, buf, count, pos - MLX5VF_MIG_REGION_DATA_OFFSET,
+			iswrite);
+		goto end;
+	}
+
+	switch (pos) {
+	case VFIO_DEVICE_MIGRATION_OFFSET(device_state):
+		/* This is RW field. */
+		if (count != sizeof(mvdev->vmig.vfio_dev_state)) {
+			ret = -EINVAL;
+			break;
+		}
+		ret = mlx5vf_pci_handle_migration_device_state(mvdev, buf,
+							       iswrite);
+		break;
+	case VFIO_DEVICE_MIGRATION_OFFSET(pending_bytes):
+		/*
+		 * The number of pending bytes still to be migrated from the
+		 * vendor driver. This is RO field.
+		 * Reading this field indicates on the start of a new iteration
+		 * to get device data.
+		 *
+		 */
+		ret = mlx5vf_pci_handle_migration_pending_bytes(mvdev, buf,
+								iswrite);
+		break;
+	case VFIO_DEVICE_MIGRATION_OFFSET(data_offset):
+		/*
+		 * The user application should read data_offset field from the
+		 * migration region. The user application should read the
+		 * device data from this offset within the migration region
+		 * during the _SAVING mode or write the device data during the
+		 * _RESUMING mode. This is RO field.
+		 */
+		ret = mlx5vf_pci_handle_migration_data_offset(mvdev, buf,
+							      iswrite);
+		break;
+	case VFIO_DEVICE_MIGRATION_OFFSET(data_size):
+		/*
+		 * The user application should read data_size to get the size
+		 * in bytes of the data copied to the migration region during
+		 * the _SAVING state by the device. The user application should
+		 * write the size in bytes of the data that was copied to
+		 * the migration region during the _RESUMING state by the user.
+		 * This is RW field.
+		 */
+		ret = mlx5vf_pci_handle_migration_data_size(mvdev, buf,
+							    iswrite);
+		break;
+	default:
+		ret = -EFAULT;
+		break;
+	}
+
+end:
+	mutex_unlock(&mvdev->state_mutex);
+	return ret;
+}
+
+static struct vfio_pci_regops migration_ops = {
+	.rw = mlx5vf_pci_mig_rw,
+};
+
+static int mlx5vf_pci_open_device(struct vfio_device *core_vdev)
+{
+	struct mlx5vf_pci_core_device *mvdev = container_of(
+		core_vdev, struct mlx5vf_pci_core_device, core_device.vdev);
+	struct vfio_pci_core_device *vdev = &mvdev->core_device;
+	int vf_id;
+	int ret;
+
+	ret = vfio_pci_core_enable(vdev);
+	if (ret)
+		return ret;
+
+	if (!mvdev->migrate_cap) {
+		vfio_pci_core_finish_enable(vdev);
+		return 0;
+	}
+
+	vf_id = pci_iov_vf_id(vdev->pdev);
+	if (vf_id < 0) {
+		ret = vf_id;
+		goto out_disable;
+	}
+
+	ret = mlx5vf_cmd_get_vhca_id(vdev->pdev, vf_id + 1,
+				     &mvdev->vmig.vhca_id);
+	if (ret)
+		goto out_disable;
+
+	ret = vfio_pci_register_dev_region(vdev, VFIO_REGION_TYPE_MIGRATION,
+					   VFIO_REGION_SUBTYPE_MIGRATION,
+					   &migration_ops,
+					   MLX5VF_MIG_REGION_DATA_OFFSET +
+					   MLX5VF_MIG_REGION_DATA_SIZE,
+					   VFIO_REGION_INFO_FLAG_READ |
+					   VFIO_REGION_INFO_FLAG_WRITE,
+					   NULL);
+	if (ret)
+		goto out_disable;
+
+	mvdev->vmig.vfio_dev_state = VFIO_DEVICE_STATE_RUNNING;
+	vfio_pci_core_finish_enable(vdev);
+	return 0;
+out_disable:
+	vfio_pci_core_disable(vdev);
+	return ret;
+}
+
+static void mlx5vf_pci_close_device(struct vfio_device *core_vdev)
+{
+	struct mlx5vf_pci_core_device *mvdev = container_of(
+		core_vdev, struct mlx5vf_pci_core_device, core_device.vdev);
+
+	vfio_pci_core_close_device(core_vdev);
+	mlx5vf_reset_mig_state(mvdev);
+}
+
+static const struct vfio_device_ops mlx5vf_pci_ops = {
+	.name = "mlx5-vfio-pci",
+	.open_device = mlx5vf_pci_open_device,
+	.close_device = mlx5vf_pci_close_device,
+	.ioctl = vfio_pci_core_ioctl,
+	.read = vfio_pci_core_read,
+	.write = vfio_pci_core_write,
+	.mmap = vfio_pci_core_mmap,
+	.request = vfio_pci_core_request,
+	.match = vfio_pci_core_match,
+};
+
+static int mlx5vf_pci_probe(struct pci_dev *pdev,
+			    const struct pci_device_id *id)
+{
+	struct mlx5vf_pci_core_device *mvdev;
+	int ret;
+
+	mvdev = kzalloc(sizeof(*mvdev), GFP_KERNEL);
+	if (!mvdev)
+		return -ENOMEM;
+	vfio_pci_core_init_device(&mvdev->core_device, pdev, &mlx5vf_pci_ops);
+
+	if (pdev->is_virtfn) {
+		struct mlx5_core_dev *mdev =
+			mlx5_vf_get_core_dev(pdev);
+
+		if (mdev) {
+			if (MLX5_CAP_GEN(mdev, migration)) {
+				mvdev->migrate_cap = 1;
+				mutex_init(&mvdev->state_mutex);
+			}
+			mlx5_vf_put_core_dev(mdev);
+		}
+	}
+
+	ret = vfio_pci_core_register_device(&mvdev->core_device);
+	if (ret)
+		goto out_free;
+
+	dev_set_drvdata(&pdev->dev, mvdev);
+	return 0;
+
+out_free:
+	vfio_pci_core_uninit_device(&mvdev->core_device);
+	kfree(mvdev);
+	return ret;
+}
+
+static void mlx5vf_pci_remove(struct pci_dev *pdev)
+{
+	struct mlx5vf_pci_core_device *mvdev = dev_get_drvdata(&pdev->dev);
+
+	vfio_pci_core_unregister_device(&mvdev->core_device);
+	vfio_pci_core_uninit_device(&mvdev->core_device);
+	kfree(mvdev);
+}
+
+static const struct pci_device_id mlx5vf_pci_table[] = {
+	{ PCI_DRIVER_OVERRIDE_DEVICE_VFIO(PCI_VENDOR_ID_MELLANOX, 0x101e) }, /* ConnectX Family mlx5Gen Virtual Function */
+	{}
+};
+
+MODULE_DEVICE_TABLE(pci, mlx5vf_pci_table);
+
+static struct pci_driver mlx5vf_pci_driver = {
+	.name = KBUILD_MODNAME,
+	.id_table = mlx5vf_pci_table,
+	.probe = mlx5vf_pci_probe,
+	.remove = mlx5vf_pci_remove,
+	.err_handler = &vfio_pci_core_err_handlers,
+};
+
+static void __exit mlx5vf_pci_cleanup(void)
+{
+	pci_unregister_driver(&mlx5vf_pci_driver);
+}
+
+static int __init mlx5vf_pci_init(void)
+{
+	return pci_register_driver(&mlx5vf_pci_driver);
+}
+
+module_init(mlx5vf_pci_init);
+module_exit(mlx5vf_pci_cleanup);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Max Gurtovoy ");
+MODULE_AUTHOR("Yishai Hadas ");
+MODULE_DESCRIPTION(
+	"MLX5 VFIO PCI - User Level meta-driver for MLX5 device family");
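
For readers unfamiliar with the v1 migration uAPI this driver wires up, here is a rough
userspace sketch (illustrative only, not part of the patch) of one stop-copy save pass
over the region registered in mlx5vf_pci_open_device(). It assumes the vfio device fd
("device_fd") and the migration region's base file offset ("mig_off") were already
discovered via VFIO_DEVICE_GET_REGION_INFO, uses the VFIO_DEVICE_STATE_* names from the
linux/vfio.h of this era (later renamed VFIO_DEVICE_STATE_V1_*), and omits all error
handling:

/*
 * Illustrative userspace sketch, not part of the patch: drives device_state,
 * pending_bytes, data_offset and data_size exactly in the order the driver's
 * mlx5vf_pci_mig_rw() handlers expect for a stop-copy save.
 */
#include <linux/vfio.h>
#include <stddef.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

#define MIG_FIELD(f) (offsetof(struct vfio_device_migration_info, f))

static void save_device_state_once(int device_fd, off_t mig_off)
{
	__u32 state = VFIO_DEVICE_STATE_SAVING;	/* _RUNNING cleared: stop-copy */
	__u64 pending, data_offset, data_size;

	/* device_state is a 4-byte RW field at the start of the region */
	pwrite(device_fd, &state, sizeof(state),
	       mig_off + MIG_FIELD(device_state));

	/* Reading pending_bytes starts a new iteration */
	pread(device_fd, &pending, sizeof(pending),
	      mig_off + MIG_FIELD(pending_bytes));
	while (pending) {
		pread(device_fd, &data_offset, sizeof(data_offset),
		      mig_off + MIG_FIELD(data_offset));
		pread(device_fd, &data_size, sizeof(data_size),
		      mig_off + MIG_FIELD(data_size));

		/* The data section lives at mig_off + data_offset */
		void *buf = malloc(data_size);
		pread(device_fd, buf, data_size, mig_off + data_offset);
		/* ... hand buf to the destination side ... */
		free(buf);

		/* Re-reading pending_bytes lets the driver advance its window */
		pread(device_fd, &pending, sizeof(pending),
		      mig_off + MIG_FIELD(pending_bytes));
	}
}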