From patchwork Tue Oct 17 13:42:09 2023
X-Patchwork-Submitter: Yishai Hadas
X-Patchwork-Id: 13425181
From: Yishai Hadas
Subject: [PATCH V1 vfio 1/9] virtio-pci: Fix common config map for modern device
Date: Tue, 17 Oct 2023 16:42:09 +0300
Message-ID: <20231017134217.82497-2-yishaih@nvidia.com>
In-Reply-To: <20231017134217.82497-1-yishaih@nvidia.com>
References: <20231017134217.82497-1-yishaih@nvidia.com>
X-Mailing-List: kvm@vger.kernel.org

From: Feng Liu

Currently vp_modern_probe() fails to map the part of the common config
space structure that starts at the notify_data offset. Because of this,
accessing those structure fields can result in an error. Fix it by
mapping the minimum of the size the device has offered and the size the
driver will access.

Fixes: ea024594b1dc ("virtio_pci: struct virtio_pci_common_cfg add queue_notify_data")
Fixes: 0cdd450e7051 ("virtio_pci: struct virtio_pci_common_cfg add queue_reset")
Signed-off-by: Feng Liu
Reported-by: Michael S. Tsirkin
Closes: https://lkml.kernel.org/kvm/20230927172553-mutt-send-email-mst@kernel.org/
Reviewed-by: Parav Pandit
Signed-off-by: Yishai Hadas
---
 drivers/virtio/virtio_pci_modern_dev.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/virtio/virtio_pci_modern_dev.c b/drivers/virtio/virtio_pci_modern_dev.c
index aad7d9296e77..7fa70d7c8146 100644
--- a/drivers/virtio/virtio_pci_modern_dev.c
+++ b/drivers/virtio/virtio_pci_modern_dev.c
@@ -290,9 +290,9 @@ int vp_modern_probe(struct virtio_pci_modern_device *mdev)
 		err = -EINVAL;
 
 	mdev->common = vp_modern_map_capability(mdev, common,
-			      sizeof(struct virtio_pci_common_cfg), 4,
-			      0, sizeof(struct virtio_pci_common_cfg),
-			      NULL, NULL);
+			      sizeof(struct virtio_pci_common_cfg), 4,
+			      0, sizeof(struct virtio_pci_modern_common_cfg),
+			      NULL, NULL);
 	if (!mdev->common)
 		goto err_map_common;
 	mdev->isr = vp_modern_map_capability(mdev, isr, sizeof(u8), 1,
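
For illustration only, a minimal sketch (not part of the patch) of why the
mapping length matters here: struct virtio_pci_modern_common_cfg extends the
classic struct virtio_pci_common_cfg with queue_notify_data and queue_reset,
so a mapping sized by the classic structure stops short of those fields. The
vp_ioread16() helper name is the one used elsewhere in
virtio_pci_modern_dev.c; the function itself is hypothetical.

/*
 * Illustrative sketch: reading a field that lies beyond the classic
 * virtio_pci_common_cfg layout. If mdev->common had been mapped with only
 * sizeof(struct virtio_pci_common_cfg), this read would fall outside the
 * mapped region.
 */
static u16 example_read_queue_reset(struct virtio_pci_modern_device *mdev)
{
	struct virtio_pci_modern_common_cfg __iomem *cfg =
		(struct virtio_pci_modern_common_cfg __iomem *)mdev->common;

	return vp_ioread16(&cfg->queue_reset);
}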

From patchwork Tue Oct 17 13:42:10 2023
X-Patchwork-Submitter: Yishai Hadas
X-Patchwork-Id: 13425182
From: Yishai Hadas
Subject: [PATCH V1 vfio 2/9] virtio: Define feature bit for administration virtqueue
Date: Tue, 17 Oct 2023 16:42:10 +0300
Message-ID: <20231017134217.82497-3-yishaih@nvidia.com>
In-Reply-To: <20231017134217.82497-1-yishaih@nvidia.com>
References: <20231017134217.82497-1-yishaih@nvidia.com>
X-Mailing-List: kvm@vger.kernel.org

From: Feng Liu

Introduce VIRTIO_F_ADMIN_VQ, which is used to indicate administration
virtqueue support.

Signed-off-by: Feng Liu
Reviewed-by: Parav Pandit
Reviewed-by: Jiri Pirko
Signed-off-by: Yishai Hadas
---
 include/uapi/linux/virtio_config.h | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/include/uapi/linux/virtio_config.h b/include/uapi/linux/virtio_config.h
index 2c712c654165..09d694968b14 100644
--- a/include/uapi/linux/virtio_config.h
+++ b/include/uapi/linux/virtio_config.h
@@ -52,7 +52,7 @@
  * rest are per-device feature bits.
  */
 #define VIRTIO_TRANSPORT_F_START	28
-#define VIRTIO_TRANSPORT_F_END		41
+#define VIRTIO_TRANSPORT_F_END		42
 
 #ifndef VIRTIO_CONFIG_NO_LEGACY
 /* Do we get callbacks when the ring is completely used, even if we've
@@ -109,4 +109,10 @@
  * This feature indicates that the driver can reset a queue individually.
  */
 #define VIRTIO_F_RING_RESET		40
+
+/*
+ * This feature indicates that the device supports administration virtqueues.
+ */
+#define VIRTIO_F_ADMIN_VQ		41
+
 #endif /* _UAPI_LINUX_VIRTIO_CONFIG_H */
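
For illustration only, a minimal sketch (not part of the patch, helper name
hypothetical) of how a driver would test the new bit after feature
negotiation, using the existing virtio_has_feature() helper:

/* Illustrative only: check for admin virtqueue support after negotiation. */
static bool example_has_admin_vq(struct virtio_device *vdev)
{
	/*
	 * VIRTIO_F_ADMIN_VQ (bit 41) is a transport feature, which is why
	 * VIRTIO_TRANSPORT_F_END moves to 42 above (the end is exclusive).
	 */
	return virtio_has_feature(vdev, VIRTIO_F_ADMIN_VQ);
}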

From patchwork Tue Oct 17 13:42:11 2023
X-Patchwork-Submitter: Yishai Hadas
X-Patchwork-Id: 13425183
From: Yishai Hadas
Subject: [PATCH V1 vfio 3/9] virtio-pci: Introduce admin virtqueue
Date: Tue, 17 Oct 2023 16:42:11 +0300
Message-ID: <20231017134217.82497-4-yishaih@nvidia.com>
In-Reply-To: <20231017134217.82497-1-yishaih@nvidia.com>
References: <20231017134217.82497-1-yishaih@nvidia.com>
X-Mailing-List: kvm@vger.kernel.org

From: Feng Liu

Introduce support for the admin virtqueue. By negotiating the
VIRTIO_F_ADMIN_VQ feature, the driver detects the capability and creates
one administration virtqueue. Implementing the administration virtqueue
in the virtio PCI generic layer enables multiple types of upper-layer
drivers, such as vfio, net and blk, to utilize it.

Signed-off-by: Feng Liu
Reviewed-by: Parav Pandit
Reviewed-by: Jiri Pirko
Signed-off-by: Yishai Hadas
---
 drivers/virtio/virtio.c                | 37 ++++++++++++++--
 drivers/virtio/virtio_pci_common.c     |  3 ++
 drivers/virtio/virtio_pci_common.h     | 15 ++++++-
 drivers/virtio/virtio_pci_modern.c     | 61 +++++++++++++++++++++++++-
 drivers/virtio/virtio_pci_modern_dev.c | 18 ++++++++
 include/linux/virtio_config.h          |  4 ++
 include/linux/virtio_pci_modern.h      |  5 +++
 7 files changed, 137 insertions(+), 6 deletions(-)

diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
index 3893dc29eb26..f4080692b351 100644
--- a/drivers/virtio/virtio.c
+++ b/drivers/virtio/virtio.c
@@ -302,9 +302,15 @@ static int virtio_dev_probe(struct device *_d)
 	if (err)
 		goto err;
 
+	if (dev->config->create_avq) {
+		err = dev->config->create_avq(dev);
+		if (err)
+			goto err;
+	}
+
 	err = drv->probe(dev);
 	if (err)
-		goto err;
+		goto err_probe;
 
 	/* If probe didn't do it, mark device DRIVER_OK ourselves. */
 	if (!(dev->config->get_status(dev) & VIRTIO_CONFIG_S_DRIVER_OK))
@@ -316,6 +322,10 @@ static int virtio_dev_probe(struct device *_d)
 	virtio_config_enable(dev);
 
 	return 0;
+
+err_probe:
+	if (dev->config->destroy_avq)
+		dev->config->destroy_avq(dev);
 err:
 	virtio_add_status(dev, VIRTIO_CONFIG_S_FAILED);
 	return err;
@@ -331,6 +341,9 @@ static void virtio_dev_remove(struct device *_d)
 
 	drv->remove(dev);
 
+	if (dev->config->destroy_avq)
+		dev->config->destroy_avq(dev);
+
 	/* Driver should have reset device. */
 	WARN_ON_ONCE(dev->config->get_status(dev));
 
@@ -489,13 +502,20 @@ EXPORT_SYMBOL_GPL(unregister_virtio_device);
 int virtio_device_freeze(struct virtio_device *dev)
 {
 	struct virtio_driver *drv = drv_to_virtio(dev->dev.driver);
+	int ret;
 
 	virtio_config_disable(dev);
 
 	dev->failed = dev->config->get_status(dev) & VIRTIO_CONFIG_S_FAILED;
 
-	if (drv && drv->freeze)
-		return drv->freeze(dev);
+	if (drv && drv->freeze) {
+		ret = drv->freeze(dev);
+		if (ret)
+			return ret;
+	}
+
+	if (dev->config->destroy_avq)
+		dev->config->destroy_avq(dev);
 
 	return 0;
 }
@@ -532,10 +552,16 @@ int virtio_device_restore(struct virtio_device *dev)
 		if (ret)
 			goto err;
 
+	if (dev->config->create_avq) {
+		ret = dev->config->create_avq(dev);
+		if (ret)
+			goto err;
+	}
+
 	if (drv->restore) {
 		ret = drv->restore(dev);
 		if (ret)
-			goto err;
+			goto err_restore;
 	}
 
 	/* If restore didn't do it, mark device DRIVER_OK ourselves. */
@@ -546,6 +572,9 @@ int virtio_device_restore(struct virtio_device *dev)
 
 	return 0;
 
+err_restore:
+	if (dev->config->destroy_avq)
+		dev->config->destroy_avq(dev);
 err:
 	virtio_add_status(dev, VIRTIO_CONFIG_S_FAILED);
 	return ret;
diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c
index c2524a7207cf..6b4766d5abe6 100644
--- a/drivers/virtio/virtio_pci_common.c
+++ b/drivers/virtio/virtio_pci_common.c
@@ -236,6 +236,9 @@ void vp_del_vqs(struct virtio_device *vdev)
 	int i;
 
 	list_for_each_entry_safe(vq, n, &vdev->vqs, list) {
+		if (vp_dev->is_avq(vdev, vq->index))
+			continue;
+
 		if (vp_dev->per_vq_vectors) {
 			int v = vp_dev->vqs[vq->index]->msix_vector;
diff --git a/drivers/virtio/virtio_pci_common.h b/drivers/virtio/virtio_pci_common.h
index 4b773bd7c58c..e03af0966a4b 100644
--- a/drivers/virtio/virtio_pci_common.h
+++ b/drivers/virtio/virtio_pci_common.h
@@ -41,6 +41,14 @@ struct virtio_pci_vq_info {
 	unsigned int msix_vector;
 };
 
+struct virtio_pci_admin_vq {
+	/* Virtqueue info associated with this admin queue. */
+	struct virtio_pci_vq_info info;
+	/* Name of the admin queue: avq.$index. */
+	char name[10];
+	u16 vq_index;
+};
+
 /* Our device structure */
 struct virtio_pci_device {
 	struct virtio_device vdev;
@@ -58,9 +66,13 @@ struct virtio_pci_device {
 	spinlock_t lock;
 	struct list_head virtqueues;
 
-	/* array of all queues for house-keeping */
+	/* Array of all virtqueues reported in the
+	 * PCI common config num_queues field
+	 */
 	struct virtio_pci_vq_info **vqs;
 
+	struct virtio_pci_admin_vq admin_vq;
+
 	/* MSI-X support */
 	int msix_enabled;
 	int intx_enabled;
@@ -86,6 +98,7 @@ struct virtio_pci_device {
 	void (*del_vq)(struct virtio_pci_vq_info *info);
 
 	u16 (*config_vector)(struct virtio_pci_device *vp_dev, u16 vector);
+	bool (*is_avq)(struct virtio_device *vdev, unsigned int index);
 };
 
 /* Constants for MSI-X */
diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
index d6bb68ba84e5..01c5ba346471 100644
--- a/drivers/virtio/virtio_pci_modern.c
+++ b/drivers/virtio/virtio_pci_modern.c
@@ -26,6 +26,16 @@ static u64 vp_get_features(struct virtio_device *vdev)
 	return vp_modern_get_features(&vp_dev->mdev);
 }
 
+static bool vp_is_avq(struct virtio_device *vdev, unsigned int index)
+{
+	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
+
+	if (!virtio_has_feature(vdev, VIRTIO_F_ADMIN_VQ))
+		return false;
+
+	return index == vp_dev->admin_vq.vq_index;
+}
+
 static void vp_transport_features(struct virtio_device *vdev, u64 features)
 {
 	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
@@ -37,6 +47,9 @@ static void vp_transport_features(struct virtio_device *vdev, u64 features)
 
 	if (features & BIT_ULL(VIRTIO_F_RING_RESET))
 		__virtio_set_bit(vdev, VIRTIO_F_RING_RESET);
+
+	if (features & BIT_ULL(VIRTIO_F_ADMIN_VQ))
+		__virtio_set_bit(vdev, VIRTIO_F_ADMIN_VQ);
 }
 
 /* virtio config->finalize_features() implementation */
@@ -317,7 +330,8 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 	else
 		notify = vp_notify;
 
-	if (index >= vp_modern_get_num_queues(mdev))
+	if (index >= vp_modern_get_num_queues(mdev) &&
+	    !vp_is_avq(&vp_dev->vdev, index))
 		return ERR_PTR(-EINVAL);
 
 	/* Check if queue is either not available or already active. */
@@ -491,6 +505,46 @@ static bool vp_get_shm_region(struct virtio_device *vdev,
 	return true;
 }
 
+static int vp_modern_create_avq(struct virtio_device *vdev)
+{
+	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
+	struct virtio_pci_admin_vq *avq;
+	struct virtqueue *vq;
+	u16 admin_q_num;
+
+	if (!virtio_has_feature(vdev, VIRTIO_F_ADMIN_VQ))
+		return 0;
+
+	admin_q_num = vp_modern_avq_num(&vp_dev->mdev);
+	if (!admin_q_num)
+		return -EINVAL;
+
+	avq = &vp_dev->admin_vq;
+	avq->vq_index = vp_modern_avq_index(&vp_dev->mdev);
+	sprintf(avq->name, "avq.%u", avq->vq_index);
+	vq = vp_dev->setup_vq(vp_dev, &vp_dev->admin_vq.info, avq->vq_index, NULL,
+			      avq->name, NULL, VIRTIO_MSI_NO_VECTOR);
+	if (IS_ERR(vq)) {
+		dev_err(&vdev->dev, "failed to setup admin virtqueue, err=%ld",
+			PTR_ERR(vq));
+		return PTR_ERR(vq);
+	}
+
+	vp_dev->admin_vq.info.vq = vq;
+	vp_modern_set_queue_enable(&vp_dev->mdev, avq->info.vq->index, true);
+	return 0;
+}
+
+static void vp_modern_destroy_avq(struct virtio_device *vdev)
+{
+	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
+
+	if (!virtio_has_feature(vdev, VIRTIO_F_ADMIN_VQ))
+		return;
+
+	vp_dev->del_vq(&vp_dev->admin_vq.info);
+}
+
 static const struct virtio_config_ops virtio_pci_config_nodev_ops = {
 	.get = NULL,
 	.set = NULL,
@@ -509,6 +563,8 @@ static const struct virtio_config_ops virtio_pci_config_nodev_ops = {
 	.get_shm_region = vp_get_shm_region,
 	.disable_vq_and_reset = vp_modern_disable_vq_and_reset,
 	.enable_vq_after_reset = vp_modern_enable_vq_after_reset,
+	.create_avq = vp_modern_create_avq,
+	.destroy_avq = vp_modern_destroy_avq,
 };
 
 static const struct virtio_config_ops virtio_pci_config_ops = {
@@ -529,6 +585,8 @@ static const struct virtio_config_ops virtio_pci_config_ops = {
 	.get_shm_region = vp_get_shm_region,
 	.disable_vq_and_reset = vp_modern_disable_vq_and_reset,
 	.enable_vq_after_reset = vp_modern_enable_vq_after_reset,
+	.create_avq = vp_modern_create_avq,
+	.destroy_avq = vp_modern_destroy_avq,
 };
 
 /* the PCI probing function */
@@ -552,6 +610,7 @@ int virtio_pci_modern_probe(struct virtio_pci_device *vp_dev)
 	vp_dev->config_vector = vp_config_vector;
 	vp_dev->setup_vq = setup_vq;
 	vp_dev->del_vq = del_vq;
+	vp_dev->is_avq = vp_is_avq;
 	vp_dev->isr = mdev->isr;
 	vp_dev->vdev.id = mdev->id;
diff --git a/drivers/virtio/virtio_pci_modern_dev.c b/drivers/virtio/virtio_pci_modern_dev.c
index 7fa70d7c8146..229a32a4cb68 100644
--- a/drivers/virtio/virtio_pci_modern_dev.c
+++ b/drivers/virtio/virtio_pci_modern_dev.c
@@ -714,6 +714,24 @@ void __iomem *vp_modern_map_vq_notify(struct virtio_pci_modern_device *mdev,
 }
 EXPORT_SYMBOL_GPL(vp_modern_map_vq_notify);
 
+u16 vp_modern_avq_num(struct virtio_pci_modern_device *mdev)
+{
+	struct virtio_pci_modern_common_cfg __iomem *cfg;
+
+	cfg = (struct virtio_pci_modern_common_cfg __iomem *)mdev->common;
+	return vp_ioread16(&cfg->admin_queue_num);
+}
+EXPORT_SYMBOL_GPL(vp_modern_avq_num);
+
+u16 vp_modern_avq_index(struct virtio_pci_modern_device *mdev)
+{
+	struct virtio_pci_modern_common_cfg __iomem *cfg;
+
+	cfg = (struct virtio_pci_modern_common_cfg __iomem *)mdev->common;
+	return vp_ioread16(&cfg->admin_queue_index);
+}
+EXPORT_SYMBOL_GPL(vp_modern_avq_index);
+
 MODULE_VERSION("0.1");
 MODULE_DESCRIPTION("Modern Virtio PCI Device");
 MODULE_AUTHOR("Jason Wang ");
diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h
index 2b3438de2c4d..da9b271b54db 100644
--- a/include/linux/virtio_config.h
+++ b/include/linux/virtio_config.h
@@ -93,6 +93,8 @@ typedef void vq_callback_t(struct virtqueue *);
  *	Returns 0 on success or error status
  *	If disable_vq_and_reset is set, then enable_vq_after_reset must also be
  *	set.
+ * @create_avq: create admin virtqueue resource.
+ * @destroy_avq: destroy admin virtqueue resource.
  */
 struct virtio_config_ops {
 	void (*get)(struct virtio_device *vdev, unsigned offset,
@@ -120,6 +122,8 @@ struct virtio_config_ops {
 			       struct virtio_shm_region *region, u8 id);
 	int (*disable_vq_and_reset)(struct virtqueue *vq);
 	int (*enable_vq_after_reset)(struct virtqueue *vq);
+	int (*create_avq)(struct virtio_device *vdev);
+	void (*destroy_avq)(struct virtio_device *vdev);
 };
 
 /* If driver didn't advertise the feature, it will never appear. */
diff --git a/include/linux/virtio_pci_modern.h b/include/linux/virtio_pci_modern.h
index 067ac1d789bc..0f8737c9ae7d 100644
--- a/include/linux/virtio_pci_modern.h
+++ b/include/linux/virtio_pci_modern.h
@@ -10,6 +10,9 @@ struct virtio_pci_modern_common_cfg {
 
 	__le16 queue_notify_data;	/* read-write */
 	__le16 queue_reset;		/* read-write */
+
+	__le16 admin_queue_index;	/* read-only */
+	__le16 admin_queue_num;		/* read-only */
 };
 
 struct virtio_pci_modern_device {
@@ -121,4 +124,6 @@ int vp_modern_probe(struct virtio_pci_modern_device *mdev);
 void vp_modern_remove(struct virtio_pci_modern_device *mdev);
 int vp_modern_get_queue_reset(struct virtio_pci_modern_device *mdev, u16 index);
 void vp_modern_set_queue_reset(struct virtio_pci_modern_device *mdev, u16 index);
+u16 vp_modern_avq_num(struct virtio_pci_modern_device *mdev);
+u16 vp_modern_avq_index(struct virtio_pci_modern_device *mdev);
 #endif
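
For illustration only, a minimal sketch (not part of the patch, helper name
hypothetical) of how code in the virtio-pci layer could locate the admin
virtqueue that vp_modern_create_avq() sets up, relying only on the
vp_dev->admin_vq bookkeeping added above:

/* Illustrative only: return the admin virtqueue, or NULL if not negotiated. */
static struct virtqueue *example_get_admin_vq(struct virtio_device *vdev)
{
	struct virtio_pci_device *vp_dev = to_vp_device(vdev);

	if (!virtio_has_feature(vdev, VIRTIO_F_ADMIN_VQ))
		return NULL;

	return vp_dev->admin_vq.info.vq;
}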

From patchwork Tue Oct 17 13:42:12 2023
X-Patchwork-Submitter: Yishai Hadas
X-Patchwork-Id: 13425184
From: Yishai Hadas
Subject: [PATCH V1 vfio 4/9] virtio-pci: Introduce admin command sending function
Date: Tue, 17 Oct 2023 16:42:12 +0300
Message-ID: <20231017134217.82497-5-yishaih@nvidia.com>
In-Reply-To: <20231017134217.82497-1-yishaih@nvidia.com>
References: <20231017134217.82497-1-yishaih@nvidia.com>
X-Mailing-List: kvm@vger.kernel.org

From: Feng Liu

Add support for sending admin commands through the admin virtqueue
interface. Abort any in-flight admin commands once a device reset
completes.

To enforce the statement below from the specification [1], the admin
queue is activated for upper-layer users only after the status has been
set to DRIVER_OK.

[1] The driver MUST NOT send any buffer available notifications to the
device before setting DRIVER_OK.

Signed-off-by: Feng Liu
Reviewed-by: Parav Pandit
Signed-off-by: Yishai Hadas
---
 drivers/virtio/virtio_pci_common.h |   3 +
 drivers/virtio/virtio_pci_modern.c | 174 +++++++++++++++++++++++++++++
 include/linux/virtio.h             |   8 ++
 include/uapi/linux/virtio_pci.h    |  22 ++++
 4 files changed, 207 insertions(+)

diff --git a/drivers/virtio/virtio_pci_common.h b/drivers/virtio/virtio_pci_common.h
index e03af0966a4b..a21b9ba01a60 100644
--- a/drivers/virtio/virtio_pci_common.h
+++ b/drivers/virtio/virtio_pci_common.h
@@ -44,9 +44,12 @@ struct virtio_pci_vq_info {
 struct virtio_pci_admin_vq {
 	/* Virtqueue info associated with this admin queue. */
 	struct virtio_pci_vq_info info;
+	struct completion flush_done;
+	refcount_t refcount;
 	/* Name of the admin queue: avq.$index. */
 	char name[10];
 	u16 vq_index;
+	bool abort;
 };
 
 /* Our device structure */
diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
index 01c5ba346471..cc159a8e6c70 100644
--- a/drivers/virtio/virtio_pci_modern.c
+++ b/drivers/virtio/virtio_pci_modern.c
@@ -36,6 +36,58 @@ static bool vp_is_avq(struct virtio_device *vdev, unsigned int index)
 	return index == vp_dev->admin_vq.vq_index;
 }
 
+static bool vp_modern_avq_get(struct virtio_pci_admin_vq *admin_vq)
+{
+	return refcount_inc_not_zero(&admin_vq->refcount);
+}
+
+static void vp_modern_avq_put(struct virtio_pci_admin_vq *admin_vq)
+{
+	if (refcount_dec_and_test(&admin_vq->refcount))
+		complete(&admin_vq->flush_done);
+}
+
+static bool vp_modern_avq_is_abort(const struct virtio_pci_admin_vq *admin_vq)
+{
+	return READ_ONCE(admin_vq->abort);
+}
+
+static void
+vp_modern_avq_set_abort(struct virtio_pci_admin_vq *admin_vq, bool abort)
+{
+	/* Mark the AVQ to abort, so that inflight commands can be aborted. */
+	WRITE_ONCE(admin_vq->abort, abort);
+}
+
+static void vp_modern_avq_activate(struct virtio_device *vdev)
+{
+	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
+	struct virtio_pci_admin_vq *admin_vq = &vp_dev->admin_vq;
+
+	if (!virtio_has_feature(vdev, VIRTIO_F_ADMIN_VQ))
+		return;
+
+	init_completion(&admin_vq->flush_done);
+	refcount_set(&admin_vq->refcount, 1);
+	vp_modern_avq_set_abort(admin_vq, false);
+}
+
+static void vp_modern_avq_deactivate(struct virtio_device *vdev)
+{
+	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
+	struct virtio_pci_admin_vq *admin_vq = &vp_dev->admin_vq;
+
+	if (!virtio_has_feature(vdev, VIRTIO_F_ADMIN_VQ))
+		return;
+
+	vp_modern_avq_set_abort(admin_vq, true);
+	/* Balance with refcount_set() during vp_modern_avq_activate */
+	vp_modern_avq_put(admin_vq);
+
+	/* Wait for all the inflight admin commands to be aborted */
+	wait_for_completion(&vp_dev->admin_vq.flush_done);
+}
+
 static void vp_transport_features(struct virtio_device *vdev, u64 features)
 {
 	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
@@ -172,6 +224,8 @@ static void vp_set_status(struct virtio_device *vdev, u8 status)
 	/* We should never be setting status to 0. */
 	BUG_ON(status == 0);
 	vp_modern_set_status(&vp_dev->mdev, status);
+	if (status & VIRTIO_CONFIG_S_DRIVER_OK)
+		vp_modern_avq_activate(vdev);
 }
 
 static void vp_reset(struct virtio_device *vdev)
@@ -188,6 +242,9 @@ static void vp_reset(struct virtio_device *vdev)
 	 */
 	while (vp_modern_get_status(mdev))
 		msleep(1);
+
+	vp_modern_avq_deactivate(vdev);
+
 	/* Flush pending VQ/configuration callbacks. */
 	vp_synchronize_vectors(vdev);
 }
@@ -505,6 +562,121 @@ static bool vp_get_shm_region(struct virtio_device *vdev,
 	return true;
 }
 
+static int virtqueue_exec_admin_cmd(struct virtio_pci_admin_vq *admin_vq,
+				    struct scatterlist **sgs,
+				    unsigned int out_num,
+				    unsigned int in_num,
+				    void *data,
+				    gfp_t gfp)
+{
+	struct virtqueue *vq;
+	int ret, len;
+
+	if (!vp_modern_avq_get(admin_vq))
+		return -EIO;
+
+	vq = admin_vq->info.vq;
+
+	ret = virtqueue_add_sgs(vq, sgs, out_num, in_num, data, gfp);
+	if (ret < 0)
+		goto out;
+
+	if (unlikely(!virtqueue_kick(vq))) {
+		ret = -EIO;
+		goto out;
+	}
+
+	while (!virtqueue_get_buf(vq, &len) &&
+	       !virtqueue_is_broken(vq) &&
+	       !vp_modern_avq_is_abort(admin_vq))
+		cpu_relax();
+
+	if (vp_modern_avq_is_abort(admin_vq)) {
+		ret = -EIO;
+		goto out;
+	}
+out:
+	vp_modern_avq_put(admin_vq);
+	return ret;
+}
+
+#define VIRTIO_AVQ_SGS_MAX	4
+
+static int vp_modern_admin_cmd_exec(struct virtio_device *vdev,
+				    struct virtio_admin_cmd *cmd)
+{
+	struct scatterlist *sgs[VIRTIO_AVQ_SGS_MAX], hdr, stat;
+	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
+	struct virtio_admin_cmd_status *va_status;
+	unsigned int out_num = 0, in_num = 0;
+	struct virtio_admin_cmd_hdr *va_hdr;
+	struct virtqueue *avq;
+	u16 status;
+	int ret;
+
+	avq = virtio_has_feature(vdev, VIRTIO_F_ADMIN_VQ) ?
+		vp_dev->admin_vq.info.vq : NULL;
+	if (!avq)
+		return -EOPNOTSUPP;
+
+	va_status = kzalloc(sizeof(*va_status), GFP_KERNEL);
+	if (!va_status)
+		return -ENOMEM;
+
+	va_hdr = kzalloc(sizeof(*va_hdr), GFP_KERNEL);
+	if (!va_hdr) {
+		ret = -ENOMEM;
+		goto err_alloc;
+	}
+
+	va_hdr->opcode = cmd->opcode;
+	va_hdr->group_type = cmd->group_type;
+	va_hdr->group_member_id = cmd->group_member_id;
+
+	/* Add header */
+	sg_init_one(&hdr, va_hdr, sizeof(*va_hdr));
+	sgs[out_num] = &hdr;
+	out_num++;
+
+	if (cmd->data_sg) {
+		sgs[out_num] = cmd->data_sg;
+		out_num++;
+	}
+
+	/* Add return status */
+	sg_init_one(&stat, va_status, sizeof(*va_status));
+	sgs[out_num + in_num] = &stat;
+	in_num++;
+
+	if (cmd->result_sg) {
+		sgs[out_num + in_num] = cmd->result_sg;
+		in_num++;
+	}
+
+	ret = virtqueue_exec_admin_cmd(&vp_dev->admin_vq, sgs,
+				       out_num, in_num,
+				       sgs, GFP_KERNEL);
+	if (ret) {
+		dev_err(&vdev->dev,
+			"Failed to execute command on admin vq: %d\n.", ret);
+		goto err_cmd_exec;
+	}
+
+	status = le16_to_cpu(va_status->status);
+	if (status != VIRTIO_ADMIN_STATUS_OK) {
+		dev_err(&vdev->dev,
+			"admin command error: status(%#x) qualifier(%#x)\n",
+			status, le16_to_cpu(va_status->status_qualifier));
+		ret = -status;
+	}
+
+err_cmd_exec:
+	kfree(va_hdr);
+err_alloc:
+	kfree(va_status);
+	return ret;
+}
+
 static int vp_modern_create_avq(struct virtio_device *vdev)
 {
 	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
@@ -530,6 +702,7 @@ static int vp_modern_create_avq(struct virtio_device *vdev)
 		return PTR_ERR(vq);
 	}
 
+	refcount_set(&vp_dev->admin_vq.refcount, 0);
 	vp_dev->admin_vq.info.vq = vq;
 	vp_modern_set_queue_enable(&vp_dev->mdev, avq->info.vq->index, true);
 	return 0;
@@ -542,6 +715,7 @@ static void vp_modern_destroy_avq(struct virtio_device *vdev)
 	if (!virtio_has_feature(vdev, VIRTIO_F_ADMIN_VQ))
 		return;
 
+	WARN_ON(refcount_read(&vp_dev->admin_vq.refcount));
 	vp_dev->del_vq(&vp_dev->admin_vq.info);
 }
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index 4cc614a38376..b0201747a263 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -103,6 +103,14 @@ int virtqueue_resize(struct virtqueue *vq, u32 num,
 int virtqueue_reset(struct virtqueue *vq,
 		    void (*recycle)(struct virtqueue *vq, void *buf));
 
+struct virtio_admin_cmd {
+	__le16 opcode;
+	__le16 group_type;
+	__le64 group_member_id;
+	struct scatterlist *data_sg;
+	struct scatterlist *result_sg;
+};
+
 /**
  * struct virtio_device - representation of a device using virtio
  * @index: unique position on the virtio bus
diff --git a/include/uapi/linux/virtio_pci.h b/include/uapi/linux/virtio_pci.h
index f703afc7ad31..68eacc9676dc 100644
--- a/include/uapi/linux/virtio_pci.h
+++ b/include/uapi/linux/virtio_pci.h
@@ -207,4 +207,26 @@ struct virtio_pci_cfg_cap {
 
 #endif /* VIRTIO_PCI_NO_MODERN */
 
+/* Admin command status. */
+#define VIRTIO_ADMIN_STATUS_OK		0
+
+struct __packed virtio_admin_cmd_hdr {
+	__le16 opcode;
+	/*
+	 * 1 - SR-IOV
+	 * 2-65535 - reserved
+	 */
+	__le16 group_type;
+	/* Unused, reserved for future extensions. */
+	__u8 reserved1[12];
+	__le64 group_member_id;
+};
+
+struct __packed virtio_admin_cmd_status {
+	__le16 status;
+	__le16 status_qualifier;
+	/* Unused, reserved for future extensions. */
+	__u8 reserved2[4];
+};
+
 #endif
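
For illustration only, a minimal sketch (not part of the patch) of how a
caller inside virtio_pci_modern.c could use vp_modern_admin_cmd_exec(): the
header and status buffers are handled by that function, so a caller only
supplies optional data_sg/result_sg scatterlists. The
VIRTIO_ADMIN_CMD_LIST_QUERY and VIRTIO_ADMIN_GROUP_TYPE_SRIOV macros used
here are introduced by the next patch in this series; the function name is
hypothetical.

/*
 * Illustrative only: query the bitmap of admin command opcodes the device
 * supports. Assumes it lives where vp_modern_admin_cmd_exec() is visible.
 */
static int example_admin_list_query(struct virtio_device *vdev, u64 *bitmap)
{
	struct virtio_admin_cmd cmd = {};
	struct scatterlist result_sg;
	__le64 *data;
	int ret;

	data = kzalloc(sizeof(*data), GFP_KERNEL);
	if (!data)
		return -ENOMEM;

	sg_init_one(&result_sg, data, sizeof(*data));
	cmd.opcode = cpu_to_le16(VIRTIO_ADMIN_CMD_LIST_QUERY);
	cmd.group_type = cpu_to_le16(VIRTIO_ADMIN_GROUP_TYPE_SRIOV);
	cmd.result_sg = &result_sg;

	ret = vp_modern_admin_cmd_exec(vdev, &cmd);
	if (!ret)
		*bitmap = le64_to_cpu(*data);

	kfree(data);
	return ret;
}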

From patchwork Tue Oct 17 13:42:13 2023
X-Patchwork-Submitter: Yishai Hadas
X-Patchwork-Id: 13425185
From: Yishai Hadas
Subject: [PATCH V1 vfio 5/9] virtio-pci: Introduce admin commands
Date: Tue, 17 Oct 2023 16:42:13 +0300
Message-ID: <20231017134217.82497-6-yishaih@nvidia.com>
In-Reply-To: <20231017134217.82497-1-yishaih@nvidia.com>
References: <20231017134217.82497-1-yishaih@nvidia.com>
raWoEgi5i7RF6GWIoeeiHtFBY68srW/RS7tMzEeSXund998Yx+FhhY3z4DTL2cNqnXmy+oyEBqddVh/IyF73a9peMPW4FFv6sA8TnDJesFI3gt8ZrVXYmZyBjKeyEkY0AdaakXwriFMiS7TPY0lLY7ULp+b0P1jp8wR3Hz+PrjcnakoZuosBuN0TMJ6JTfStYUgjyzB3IJFneUi3FI1aE7b67hjReahWbLYsmrJgQY847LIenuIQNEBKWnSos2Nf1IMK27uBbDoJe3+K06ElqWua4zSIsfhVabToPp+X0nAbg52CbhqOrrSu0WPb0xTkxcrD7K82JKc/NaMFXhDV+/IgsOWpDxCT79bECcB8lQ0VHCpR147rh6ziYgozRRcxKdcvhp8ExdOJK3UIE3YqVDZzld44S/JJOUyYHqxweSrczS2c25bc3k0xld6JEmbLhOD9zhjNOgrKiI8h2mbaEP+CIImbJGnFR6Y5ZjHNu3djo1FPK4xmgZC1PVwmiGhahEd2vlD/jTTbFu4zeiMPJqNvQPzplJ0TNDhJLG2Zu62kQFB4XKyPRKR/R6m4i8KWsUZJU7eDl28SX5qYSd2i8c7QoC0e9QIWRMNH2gMBFyw7e3N+oiOgjBgfl1PporqXeNsnb+8A6/OFF0kqkQUmW3IeoQl2ZfkVP7tn+ohjz66MVfcdmuSuvRKe7eLmscfyLdNGUtgOxS9otY5R1LKuRVP5XH3VCJgG3dqVbL09L11c453YhbluL6D3lpBzxkg9 X-Forefront-Antispam-Report: CIP:216.228.117.160;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:mail.nvidia.com;PTR:dc6edge1.nvidia.com;CAT:NONE;SFS:(13230031)(4636009)(136003)(376002)(39860400002)(346002)(396003)(230922051799003)(451199024)(186009)(82310400011)(64100799003)(1800799009)(40470700004)(36840700001)(46966006)(7696005)(316002)(40460700003)(40480700001)(478600001)(70206006)(110136005)(70586007)(6666004)(54906003)(6636002)(356005)(83380400001)(47076005)(36860700001)(82740400003)(86362001)(336012)(2616005)(107886003)(26005)(1076003)(41300700001)(426003)(5660300002)(36756003)(7636003)(8936002)(4326008)(2906002)(8676002)(2101003);DIR:OUT;SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Oct 2023 13:43:19.8808 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: dfb71b83-c02e-4f42-d35b-08dbcf170650 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a;Ip=[216.228.117.160];Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: SA2PEPF00001507.namprd04.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW3PR12MB4571 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Feng Liu Introduces admin commands, as follow: The "list query" command can be used by the driver to query the set of admin commands supported by the virtio device. The "list use" command is used to inform the virtio device which admin commands the driver will use. The "legacy common cfg rd/wr" commands are used to read from/write into the legacy common configuration structure. The "legacy dev cfg rd/wr" commands are used to read from/write into the legacy device configuration structure. The "notify info" command is used to query the notification region information. Signed-off-by: Feng Liu Reviewed-by: Parav Pandit Reviewed-by: Jiri Pirko Signed-off-by: Yishai Hadas --- include/uapi/linux/virtio_pci.h | 44 +++++++++++++++++++++++++++++++++ 1 file changed, 44 insertions(+) diff --git a/include/uapi/linux/virtio_pci.h b/include/uapi/linux/virtio_pci.h index 68eacc9676dc..6e42c211fc08 100644 --- a/include/uapi/linux/virtio_pci.h +++ b/include/uapi/linux/virtio_pci.h @@ -210,6 +210,23 @@ struct virtio_pci_cfg_cap { /* Admin command status. */ #define VIRTIO_ADMIN_STATUS_OK 0 +/* Admin command opcode. */ +#define VIRTIO_ADMIN_CMD_LIST_QUERY 0x0 +#define VIRTIO_ADMIN_CMD_LIST_USE 0x1 + +/* Admin command group type. */ +#define VIRTIO_ADMIN_GROUP_TYPE_SRIOV 0x1 + +/* Transitional device admin command. 
*/ +#define VIRTIO_ADMIN_CMD_LEGACY_COMMON_CFG_WRITE 0x2 +#define VIRTIO_ADMIN_CMD_LEGACY_COMMON_CFG_READ 0x3 +#define VIRTIO_ADMIN_CMD_LEGACY_DEV_CFG_WRITE 0x4 +#define VIRTIO_ADMIN_CMD_LEGACY_DEV_CFG_READ 0x5 +#define VIRTIO_ADMIN_CMD_LEGACY_NOTIFY_INFO 0x6 + +/* Increment MAX_OPCODE to next value when new opcode is added */ +#define VIRTIO_ADMIN_MAX_CMD_OPCODE 0x6 + struct __packed virtio_admin_cmd_hdr { __le16 opcode; /* @@ -229,4 +246,31 @@ struct __packed virtio_admin_cmd_status { __u8 reserved2[4]; }; +struct __packed virtio_admin_cmd_legacy_wr_data { + __u8 offset; /* Starting offset of the register(s) to write. */ + __u8 reserved[7]; + __u8 registers[]; +}; + +struct __packed virtio_admin_cmd_legacy_rd_data { + __u8 offset; /* Starting offset of the register(s) to read. */ +}; + +#define VIRTIO_ADMIN_CMD_NOTIFY_INFO_FLAGS_END 0 +#define VIRTIO_ADMIN_CMD_NOTIFY_INFO_FLAGS_OWNER_DEV 0x1 +#define VIRTIO_ADMIN_CMD_NOTIFY_INFO_FLAGS_OWNER_MEM 0x2 + +#define VIRTIO_ADMIN_CMD_MAX_NOTIFY_INFO 4 + +struct __packed virtio_admin_cmd_notify_info_data { + __u8 flags; /* 0 = end of list, 1 = owner device, 2 = member device */ + __u8 bar; /* BAR of the member or the owner device */ + __u8 padding[6]; + __le64 offset; /* Offset within bar. */ +}; + +struct virtio_admin_cmd_notify_info_result { + struct virtio_admin_cmd_notify_info_data entries[VIRTIO_ADMIN_CMD_MAX_NOTIFY_INFO]; +}; + #endif From patchwork Tue Oct 17 13:42:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yishai Hadas X-Patchwork-Id: 13425187 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 16D99C41513 for ; Tue, 17 Oct 2023 13:43:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343839AbjJQNn1 (ORCPT ); Tue, 17 Oct 2023 09:43:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34296 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1343951AbjJQNnZ (ORCPT ); Tue, 17 Oct 2023 09:43:25 -0400 Received: from NAM10-MW2-obe.outbound.protection.outlook.com (mail-mw2nam10on2042.outbound.protection.outlook.com [40.107.94.42]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 67BB3F2 for ; Tue, 17 Oct 2023 06:43:23 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=AWx+21UEpM7DUqmnGplXYy94NvkOdWUGOGFfXf+vIftKfq9dnD+Kt5RnoP3lo0VJCDpQg13a7hhUD2d+N5YYM8Gkwx76qEgoDHMZ3XJ2YjadiacF8/zUoFHTb93kyeAmn8ugkUty8cF6y6Ucv8jwRNO+xfSY84ZIjLlgh0MSdsYnvErUMd44rZK7r0HhHg+Kop3O0/6mYjkbL5lFKcqGZRX/u8rrDHuyOqxAhAS4aEmczX/ymvNwrEgen4obTS1nNj6UW3HQ32lfPn/kAsDfyFDkOgDsXWOOCRXqW6FeV0sPoyomKWJ8My5Cv+xR2GRyuEGH87g0aWalO8gHDwQauw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=/Hzt3LlE9oKpkaEQVGO0qb6WbPswtHZG+hctn50Uums=; 
b=jX0V+uXs0zMW3Toa5SJGp4dNB8Wu/P9tegV0iyXbecwYjgz2BtcRKl556Gp/AtR5Yxwel2Xt16+7VqjM8T9hKrTIg1OITBWVUey90V1U/7VlTtOmdT/KubSd3drjlT7QWeiN5VRcL3Ba9BPZKD7ZjUrN/pdZ+h4H8uQxVHFpnKpAeEL1ht/MxBXD8wraDBQA5DyRr4fVe8C4EPfzAlD29OwX9EeTy+qJagizus198GfAkcINTfxSC2QXckzSjk0IA3oPDKAhG5dED8DedBmvWaYU2XYmaHlw+N/I5OjdyE5QQrhV24p9jvBy+o0Cri7WhO7jiJUbfUiZfcoBgSY5qw== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.117.161) smtp.rcpttodomain=redhat.com smtp.mailfrom=nvidia.com; dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=/Hzt3LlE9oKpkaEQVGO0qb6WbPswtHZG+hctn50Uums=; b=Ot/0WPWnuIWsLZnHv6HITOAUpGbjtj0hDU5Bnjz69V2kO90vGHmpgqQ0DkEyD7f6d8sSghzg3iS+oiqMd0cExHQVwqsJUtimciC1NMc/1qsWKWodZmdOU5xg8U97gpxVYBwbElLodi+T1S6agrtqgxf8CkqpitKOQqgvO6mevSaVwIxaaO0bZYQ5iP+kV1A+K9YysGWEXTa4qYAopmUdEC38bZ92F/EkHP2358jvQdpgUK57YQoTbFtsGLDOpiW0dPEsovCWXkCwBJaZVtotUKcadL+LLMeRg88NnQ2xc34i2TcY/16z2BG2sJmagPpmmH4st0g/E+nltcNDf8CSfA== Received: from MN2PR05CA0007.namprd05.prod.outlook.com (2603:10b6:208:c0::20) by DS7PR12MB6334.namprd12.prod.outlook.com (2603:10b6:8:95::19) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6886.35; Tue, 17 Oct 2023 13:43:20 +0000 Received: from BL02EPF0001A103.namprd05.prod.outlook.com (2603:10b6:208:c0:cafe::3) by MN2PR05CA0007.outlook.office365.com (2603:10b6:208:c0::20) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6907.18 via Frontend Transport; Tue, 17 Oct 2023 13:43:20 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 216.228.117.161) smtp.mailfrom=nvidia.com; dkim=none (message not signed) header.d=none;dmarc=pass action=none header.from=nvidia.com; Received-SPF: Pass (protection.outlook.com: domain of nvidia.com designates 216.228.117.161 as permitted sender) receiver=protection.outlook.com; client-ip=216.228.117.161; helo=mail.nvidia.com; pr=C Received: from mail.nvidia.com (216.228.117.161) by BL02EPF0001A103.mail.protection.outlook.com (10.167.241.133) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6907.20 via Frontend Transport; Tue, 17 Oct 2023 13:43:20 +0000 Received: from rnnvmail205.nvidia.com (10.129.68.10) by mail.nvidia.com (10.129.200.67) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.41; Tue, 17 Oct 2023 06:43:06 -0700 Received: from rnnvmail205.nvidia.com (10.129.68.10) by rnnvmail205.nvidia.com (10.129.68.10) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.41; Tue, 17 Oct 2023 06:43:06 -0700 Received: from vdi.nvidia.com (10.127.8.10) by mail.nvidia.com (10.129.68.10) with Microsoft SMTP Server id 15.2.986.41 via Frontend Transport; Tue, 17 Oct 2023 06:43:02 -0700 From: Yishai Hadas To: , , , CC: , , , , , , , , , , Subject: [PATCH V1 vfio 6/9] virtio-pci: Introduce APIs to execute legacy IO admin commands Date: Tue, 17 Oct 2023 16:42:14 +0300 Message-ID: <20231017134217.82497-7-yishaih@nvidia.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20231017134217.82497-1-yishaih@nvidia.com> References: <20231017134217.82497-1-yishaih@nvidia.com> MIME-Version: 1.0 X-NV-OnPremToCloud: ExternallySecured X-EOPAttributedMessage: 0 
X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: BL02EPF0001A103:EE_|DS7PR12MB6334:EE_ X-MS-Office365-Filtering-Correlation-Id: b8ba6ff7-a8e7-4523-8586-08dbcf1706bc X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: 9oOB0cT0k1vNWuPUWbJ4L+Lz5hMSl2m3tF6P5BMqVoLMj3RWTMiNs1ZdTGw+ziH0eH6Bksutw7nTKYIrtPu87g8UkUYzJi4FklQnxhoW/dkv0PT6rUorkrgNqXzNXrx1oaqtB5UffZmuDgm/Sf2sYg4fEa5A9jt05vt5weuaWW31ceKICW0XtrFJpD1k+zZ8CqkAt79zKUnXYRYSYfjnUjxzQntgfzw7GgVtjhTGsIA9aQ1CvQ17qZDVeX43Ipco4ivgURHQoKirOtUyBhSsfwXTI6hnzpYu2oziGUpFK4fXt7YsXvWfb23N8gxZCHypbhLVvweuPRE+mqxP2i4w8RqLs/lGovqpgK3vLOfckL1FjR8XQbkyMF7WyKzNEYupe2Brqls0R2E/o3xp1s3YeDxTZGkfftg3uAakx+bP/agTjk/KbAoJU5+vO1ruFDIdZfGLaooCm0BnAshtR+8kwVk4iSU2MbjKywo+gnJFOR+XWHWDz/bTtSzcC1BkcAYFfNgsHxIr6r2B1RmrjyfSXwGosCZLg3el7YH8gQPqoJQMSR++CO3YvuC2/t2PXcYF+DMcpHvabUbUERwKCKZeBU7fhVJ6e5Lk/2Y4w9W/8JsvXTBIOufESauq7wZQA3/MppqqO6wDJ182R5IYI9IVBE4r8tidu6oN1mP2G37IiOmhmCeyBkWl90KUtFjdWkLgJt1y86zyHAfDGlJWmkjRpawP6MhwiyHBR4NE8rxpvJ2XwwKpBGGfvzSq9Amz2PW8 X-Forefront-Antispam-Report: CIP:216.228.117.161;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:mail.nvidia.com;PTR:dc6edge2.nvidia.com;CAT:NONE;SFS:(13230031)(4636009)(346002)(39860400002)(396003)(376002)(136003)(230922051799003)(451199024)(64100799003)(82310400011)(186009)(1800799009)(36840700001)(40470700004)(46966006)(336012)(40480700001)(40460700003)(82740400003)(36756003)(356005)(47076005)(83380400001)(36860700001)(6666004)(7636003)(26005)(7696005)(2906002)(54906003)(316002)(6636002)(70206006)(1076003)(426003)(2616005)(70586007)(478600001)(107886003)(110136005)(86362001)(41300700001)(8936002)(8676002)(4326008)(5660300002)(2101003);DIR:OUT;SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Oct 2023 13:43:20.5241 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: b8ba6ff7-a8e7-4523-8586-08dbcf1706bc X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a;Ip=[216.228.117.161];Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: BL02EPF0001A103.namprd05.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR12MB6334 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Introduce APIs to execute legacy IO admin commands. It includes: list_query/use, io_legacy_read/write, io_legacy_notify_info. Those APIs will be used by the next patches from this series. 
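For orientation, the expected calling sequence for these helpers is: query the PF's supported admin command opcodes, verify that the legacy opcodes are present, then report back which opcodes will be used. The sketch below is illustrative only and is not part of this patch: the helper name is made up, and it assumes the list query/use payload is a little-endian bitmap with one bit per VIRTIO_ADMIN_CMD_* opcode, as the virtio spec describes.

#include <linux/bits.h>
#include <linux/pci.h>
#include <linux/slab.h>
#include <linux/virtio_pci.h>
#include <linux/virtio_pci_admin.h>	/* added by this patch */

/* Illustrative caller-side sketch, not part of this patch. */
static int example_negotiate_legacy_admin_cmds(struct pci_dev *vf_pdev)
{
	u64 needed = BIT_ULL(VIRTIO_ADMIN_CMD_LEGACY_COMMON_CFG_WRITE) |
		     BIT_ULL(VIRTIO_ADMIN_CMD_LEGACY_COMMON_CFG_READ) |
		     BIT_ULL(VIRTIO_ADMIN_CMD_LEGACY_DEV_CFG_WRITE) |
		     BIT_ULL(VIRTIO_ADMIN_CMD_LEGACY_DEV_CFG_READ) |
		     BIT_ULL(VIRTIO_ADMIN_CMD_LEGACY_NOTIFY_INFO);
	__le64 *buf;
	int ret;

	/* The buffer ends up in a scatterlist, so it must not live on the stack. */
	buf = kzalloc(sizeof(*buf), GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* Ask the PF which admin command opcodes it supports for its VFs. */
	ret = virtio_pci_admin_list_query(vf_pdev, (u8 *)buf, sizeof(*buf));
	if (ret)
		goto out;

	if ((le64_to_cpu(*buf) & needed) != needed) {
		ret = -EOPNOTSUPP;
		goto out;
	}

	/* Tell the device which opcodes this driver is going to use. */
	*buf = cpu_to_le64(needed);
	ret = virtio_pci_admin_list_use(vf_pdev, (u8 *)buf, sizeof(*buf));
out:
	kfree(buf);
	return ret;
}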
Signed-off-by: Yishai Hadas --- drivers/virtio/virtio_pci_common.c | 11 ++ drivers/virtio/virtio_pci_common.h | 2 + drivers/virtio/virtio_pci_modern.c | 206 +++++++++++++++++++++++++++++ include/linux/virtio_pci_admin.h | 18 +++ 4 files changed, 237 insertions(+) create mode 100644 include/linux/virtio_pci_admin.h diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c index 6b4766d5abe6..212d68401d2c 100644 --- a/drivers/virtio/virtio_pci_common.c +++ b/drivers/virtio/virtio_pci_common.c @@ -645,6 +645,17 @@ static struct pci_driver virtio_pci_driver = { .sriov_configure = virtio_pci_sriov_configure, }; +struct virtio_device *virtio_pci_vf_get_pf_dev(struct pci_dev *pdev) +{ + struct virtio_pci_device *pf_vp_dev; + + pf_vp_dev = pci_iov_get_pf_drvdata(pdev, &virtio_pci_driver); + if (IS_ERR(pf_vp_dev)) + return NULL; + + return &pf_vp_dev->vdev; +} + module_pci_driver(virtio_pci_driver); MODULE_AUTHOR("Anthony Liguori "); diff --git a/drivers/virtio/virtio_pci_common.h b/drivers/virtio/virtio_pci_common.h index a21b9ba01a60..2785e61ed668 100644 --- a/drivers/virtio/virtio_pci_common.h +++ b/drivers/virtio/virtio_pci_common.h @@ -155,4 +155,6 @@ static inline void virtio_pci_legacy_remove(struct virtio_pci_device *vp_dev) int virtio_pci_modern_probe(struct virtio_pci_device *); void virtio_pci_modern_remove(struct virtio_pci_device *); +struct virtio_device *virtio_pci_vf_get_pf_dev(struct pci_dev *pdev); + #endif diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c index cc159a8e6c70..00b65e20b2f5 100644 --- a/drivers/virtio/virtio_pci_modern.c +++ b/drivers/virtio/virtio_pci_modern.c @@ -719,6 +719,212 @@ static void vp_modern_destroy_avq(struct virtio_device *vdev) vp_dev->del_vq(&vp_dev->admin_vq.info); } +/* + * virtio_pci_admin_list_query - Provides to driver list of commands + * supported for the PCI VF. + * @dev: VF pci_dev + * @buf: buffer to hold the returned list + * @buf_size: size of the given buffer + * + * Returns 0 on success, or negative on failure. + */ +int virtio_pci_admin_list_query(struct pci_dev *pdev, u8 *buf, int buf_size) +{ + struct virtio_device *virtio_dev = virtio_pci_vf_get_pf_dev(pdev); + struct virtio_admin_cmd cmd = {}; + struct scatterlist result_sg; + + if (!virtio_dev) + return -ENODEV; + + sg_init_one(&result_sg, buf, buf_size); + cmd.opcode = cpu_to_le16(VIRTIO_ADMIN_CMD_LIST_QUERY); + cmd.group_type = cpu_to_le16(VIRTIO_ADMIN_GROUP_TYPE_SRIOV); + cmd.result_sg = &result_sg; + + return vp_modern_admin_cmd_exec(virtio_dev, &cmd); +} +EXPORT_SYMBOL_GPL(virtio_pci_admin_list_query); + +/* + * virtio_pci_admin_list_use - Provides to device list of commands + * used for the PCI VF. + * @dev: VF pci_dev + * @buf: buffer which holds the list + * @buf_size: size of the given buffer + * + * Returns 0 on success, or negative on failure. 
+ */ +int virtio_pci_admin_list_use(struct pci_dev *pdev, u8 *buf, int buf_size) +{ + struct virtio_device *virtio_dev = virtio_pci_vf_get_pf_dev(pdev); + struct virtio_admin_cmd cmd = {}; + struct scatterlist data_sg; + + if (!virtio_dev) + return -ENODEV; + + sg_init_one(&data_sg, buf, buf_size); + cmd.opcode = cpu_to_le16(VIRTIO_ADMIN_CMD_LIST_USE); + cmd.group_type = cpu_to_le16(VIRTIO_ADMIN_GROUP_TYPE_SRIOV); + cmd.data_sg = &data_sg; + + return vp_modern_admin_cmd_exec(virtio_dev, &cmd); +} +EXPORT_SYMBOL_GPL(virtio_pci_admin_list_use); + +/* + * virtio_pci_admin_legacy_io_write - Write legacy registers of a member device + * @dev: VF pci_dev + * @opcode: op code of the io write command + * @offset: starting byte offset within the registers to write to + * @size: size of the data to write + * @buf: buffer which holds the data + * + * Returns 0 on success, or negative on failure. + */ +int virtio_pci_admin_legacy_io_write(struct pci_dev *pdev, u16 opcode, + u8 offset, u8 size, u8 *buf) +{ + struct virtio_device *virtio_dev = virtio_pci_vf_get_pf_dev(pdev); + struct virtio_admin_cmd_legacy_wr_data *data; + struct virtio_admin_cmd cmd = {}; + struct scatterlist data_sg; + int vf_id; + int ret; + + if (!virtio_dev) + return -ENODEV; + + vf_id = pci_iov_vf_id(pdev); + if (vf_id < 0) + return vf_id; + + data = kzalloc(sizeof(*data) + size, GFP_KERNEL); + if (!data) + return -ENOMEM; + + data->offset = offset; + memcpy(data->registers, buf, size); + sg_init_one(&data_sg, data, sizeof(*data) + size); + cmd.opcode = cpu_to_le16(opcode); + cmd.group_type = cpu_to_le16(VIRTIO_ADMIN_GROUP_TYPE_SRIOV); + cmd.group_member_id = cpu_to_le64(vf_id + 1); + cmd.data_sg = &data_sg; + ret = vp_modern_admin_cmd_exec(virtio_dev, &cmd); + + kfree(data); + return ret; +} +EXPORT_SYMBOL_GPL(virtio_pci_admin_legacy_io_write); + +/* + * virtio_pci_admin_legacy_io_read - Read legacy registers of a member device + * @dev: VF pci_dev + * @opcode: op code of the io read command + * @offset: starting byte offset within the registers to read from + * @size: size of the data to be read + * @buf: buffer to hold the returned data + * + * Returns 0 on success, or negative on failure. + */ +int virtio_pci_admin_legacy_io_read(struct pci_dev *pdev, u16 opcode, + u8 offset, u8 size, u8 *buf) +{ + struct virtio_device *virtio_dev = virtio_pci_vf_get_pf_dev(pdev); + struct virtio_admin_cmd_legacy_rd_data *data; + struct scatterlist data_sg, result_sg; + struct virtio_admin_cmd cmd = {}; + int vf_id; + int ret; + + if (!virtio_dev) + return -ENODEV; + + vf_id = pci_iov_vf_id(pdev); + if (vf_id < 0) + return vf_id; + + data = kzalloc(sizeof(*data), GFP_KERNEL); + if (!data) + return -ENOMEM; + + data->offset = offset; + sg_init_one(&data_sg, data, sizeof(*data)); + sg_init_one(&result_sg, buf, size); + cmd.opcode = cpu_to_le16(opcode); + cmd.group_type = cpu_to_le16(VIRTIO_ADMIN_GROUP_TYPE_SRIOV); + cmd.group_member_id = cpu_to_le64(vf_id + 1); + cmd.data_sg = &data_sg; + cmd.result_sg = &result_sg; + ret = vp_modern_admin_cmd_exec(virtio_dev, &cmd); + + kfree(data); + return ret; +} +EXPORT_SYMBOL_GPL(virtio_pci_admin_legacy_io_read); + +/* + * virtio_pci_admin_legacy_io_notify_info - Read the queue notification + * information for legacy interface + * @dev: VF pci_dev + * @req_bar_flags: requested bar flags + * @bar: on output the BAR number of the member device + * @bar_offset: on output the offset within bar + * + * Returns 0 on success, or negative on failure. 
+ */ +int virtio_pci_admin_legacy_io_notify_info(struct pci_dev *pdev, + u8 req_bar_flags, u8 *bar, + u64 *bar_offset) +{ + struct virtio_device *virtio_dev = virtio_pci_vf_get_pf_dev(pdev); + struct virtio_admin_cmd_notify_info_result *result; + struct virtio_admin_cmd cmd = {}; + struct scatterlist result_sg; + int vf_id; + int ret; + + if (!virtio_dev) + return -ENODEV; + + vf_id = pci_iov_vf_id(pdev); + if (vf_id < 0) + return vf_id; + + result = kzalloc(sizeof(*result), GFP_KERNEL); + if (!result) + return -ENOMEM; + + sg_init_one(&result_sg, result, sizeof(*result)); + cmd.opcode = cpu_to_le16(VIRTIO_ADMIN_CMD_LEGACY_NOTIFY_INFO); + cmd.group_type = cpu_to_le16(VIRTIO_ADMIN_GROUP_TYPE_SRIOV); + cmd.group_member_id = cpu_to_le64(vf_id + 1); + cmd.result_sg = &result_sg; + ret = vp_modern_admin_cmd_exec(virtio_dev, &cmd); + if (!ret) { + struct virtio_admin_cmd_notify_info_data *entry; + int i; + + ret = -ENOENT; + for (i = 0; i < VIRTIO_ADMIN_CMD_MAX_NOTIFY_INFO; i++) { + entry = &result->entries[i]; + if (entry->flags == VIRTIO_ADMIN_CMD_NOTIFY_INFO_FLAGS_END) + break; + if (entry->flags != req_bar_flags) + continue; + *bar = entry->bar; + *bar_offset = le64_to_cpu(entry->offset); + ret = 0; + break; + } + } + + kfree(result); + return ret; +} +EXPORT_SYMBOL_GPL(virtio_pci_admin_legacy_io_notify_info); + static const struct virtio_config_ops virtio_pci_config_nodev_ops = { .get = NULL, .set = NULL, diff --git a/include/linux/virtio_pci_admin.h b/include/linux/virtio_pci_admin.h new file mode 100644 index 000000000000..cb916a4bc1b1 --- /dev/null +++ b/include/linux/virtio_pci_admin.h @@ -0,0 +1,18 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _LINUX_VIRTIO_PCI_ADMIN_H +#define _LINUX_VIRTIO_PCI_ADMIN_H + +#include +#include + +int virtio_pci_admin_list_use(struct pci_dev *pdev, u8 *buf, int buf_size); +int virtio_pci_admin_list_query(struct pci_dev *pdev, u8 *buf, int buf_size); +int virtio_pci_admin_legacy_io_write(struct pci_dev *pdev, u16 opcode, + u8 offset, u8 size, u8 *buf); +int virtio_pci_admin_legacy_io_read(struct pci_dev *pdev, u16 opcode, + u8 offset, u8 size, u8 *buf); +int virtio_pci_admin_legacy_io_notify_info(struct pci_dev *pdev, + u8 req_bar_flags, u8 *bar, + u64 *bar_offset); + +#endif /* _LINUX_VIRTIO_PCI_ADMIN_H */ From patchwork Tue Oct 17 13:42:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yishai Hadas X-Patchwork-Id: 13425186 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3FD6FCDB483 for ; Tue, 17 Oct 2023 13:43:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343954AbjJQNn2 (ORCPT ); Tue, 17 Oct 2023 09:43:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59170 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1343910AbjJQNn1 (ORCPT ); Tue, 17 Oct 2023 09:43:27 -0400 Received: from NAM11-CO1-obe.outbound.protection.outlook.com (mail-co1nam11on2088.outbound.protection.outlook.com [40.107.220.88]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 895FBED for ; Tue, 17 Oct 2023 06:43:25 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; 
From: Yishai Hadas To: , , , CC: , , , , , , , , , , Subject: [PATCH V1 vfio 7/9] vfio/pci: Expose vfio_pci_core_setup_barmap() Date: Tue, 17 Oct 2023 16:42:15 +0300 Message-ID: <20231017134217.82497-8-yishaih@nvidia.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20231017134217.82497-1-yishaih@nvidia.com> References: <20231017134217.82497-1-yishaih@nvidia.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Expose vfio_pci_core_setup_barmap() so it can be used by drivers. This lets a driver mmap a BAR and reuse the mapping from both vfio and the driver itself, where applicable. This API will be used by the next patches in this series for the upcoming vfio/virtio driver.
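As a usage illustration (not part of this patch; the function name below is hypothetical), a variant driver can call the now-exported helper once and then derive pointers into the mapped BAR, the same way vfio_pci_rdwr.c uses it internally:

#include <linux/types.h>
#include <linux/vfio_pci_core.h>

/* Hypothetical helper in a vfio variant driver, for illustration only. */
static int example_map_bar_offset(struct vfio_pci_core_device *vdev,
				  int bar, u64 offset,
				  void __iomem **addr)
{
	int ret;

	/* Requests the region and ioremaps the BAR on first use. */
	ret = vfio_pci_core_setup_barmap(vdev, bar);
	if (ret)
		return ret;

	/* barmap[bar] now holds a kernel mapping of the whole BAR. */
	*addr = vdev->barmap[bar] + offset;
	return 0;
}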
Signed-off-by: Yishai Hadas --- drivers/vfio/pci/vfio_pci_core.c | 25 +++++++++++++++++++++++++ drivers/vfio/pci/vfio_pci_rdwr.c | 28 ++-------------------------- include/linux/vfio_pci_core.h | 1 + 3 files changed, 28 insertions(+), 26 deletions(-) diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c index 1929103ee59a..ebea39836dd9 100644 --- a/drivers/vfio/pci/vfio_pci_core.c +++ b/drivers/vfio/pci/vfio_pci_core.c @@ -684,6 +684,31 @@ void vfio_pci_core_disable(struct vfio_pci_core_device *vdev) } EXPORT_SYMBOL_GPL(vfio_pci_core_disable); +int vfio_pci_core_setup_barmap(struct vfio_pci_core_device *vdev, int bar) +{ + struct pci_dev *pdev = vdev->pdev; + void __iomem *io; + int ret; + + if (vdev->barmap[bar]) + return 0; + + ret = pci_request_selected_regions(pdev, 1 << bar, "vfio"); + if (ret) + return ret; + + io = pci_iomap(pdev, bar, 0); + if (!io) { + pci_release_selected_regions(pdev, 1 << bar); + return -ENOMEM; + } + + vdev->barmap[bar] = io; + + return 0; +} +EXPORT_SYMBOL_GPL(vfio_pci_core_setup_barmap); + void vfio_pci_core_close_device(struct vfio_device *core_vdev) { struct vfio_pci_core_device *vdev = diff --git a/drivers/vfio/pci/vfio_pci_rdwr.c b/drivers/vfio/pci/vfio_pci_rdwr.c index e27de61ac9fe..6f08b3ecbb89 100644 --- a/drivers/vfio/pci/vfio_pci_rdwr.c +++ b/drivers/vfio/pci/vfio_pci_rdwr.c @@ -200,30 +200,6 @@ static ssize_t do_io_rw(struct vfio_pci_core_device *vdev, bool test_mem, return done; } -static int vfio_pci_setup_barmap(struct vfio_pci_core_device *vdev, int bar) -{ - struct pci_dev *pdev = vdev->pdev; - int ret; - void __iomem *io; - - if (vdev->barmap[bar]) - return 0; - - ret = pci_request_selected_regions(pdev, 1 << bar, "vfio"); - if (ret) - return ret; - - io = pci_iomap(pdev, bar, 0); - if (!io) { - pci_release_selected_regions(pdev, 1 << bar); - return -ENOMEM; - } - - vdev->barmap[bar] = io; - - return 0; -} - ssize_t vfio_pci_bar_rw(struct vfio_pci_core_device *vdev, char __user *buf, size_t count, loff_t *ppos, bool iswrite) { @@ -262,7 +238,7 @@ ssize_t vfio_pci_bar_rw(struct vfio_pci_core_device *vdev, char __user *buf, } x_end = end; } else { - int ret = vfio_pci_setup_barmap(vdev, bar); + int ret = vfio_pci_core_setup_barmap(vdev, bar); if (ret) { done = ret; goto out; @@ -438,7 +414,7 @@ int vfio_pci_ioeventfd(struct vfio_pci_core_device *vdev, loff_t offset, return -EINVAL; #endif - ret = vfio_pci_setup_barmap(vdev, bar); + ret = vfio_pci_core_setup_barmap(vdev, bar); if (ret) return ret; diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h index 562e8754869d..67ac58e20e1d 100644 --- a/include/linux/vfio_pci_core.h +++ b/include/linux/vfio_pci_core.h @@ -127,6 +127,7 @@ int vfio_pci_core_match(struct vfio_device *core_vdev, char *buf); int vfio_pci_core_enable(struct vfio_pci_core_device *vdev); void vfio_pci_core_disable(struct vfio_pci_core_device *vdev); void vfio_pci_core_finish_enable(struct vfio_pci_core_device *vdev); +int vfio_pci_core_setup_barmap(struct vfio_pci_core_device *vdev, int bar); pci_ers_result_t vfio_pci_core_aer_err_detected(struct pci_dev *pdev, pci_channel_state_t state); From patchwork Tue Oct 17 13:42:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yishai Hadas X-Patchwork-Id: 13425188 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org 
helo=mail.nvidia.com; pr=C Received: from mail.nvidia.com (216.228.117.160) by SA2PEPF00001508.mail.protection.outlook.com (10.167.242.40) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6907.20 via Frontend Transport; Tue, 17 Oct 2023 13:43:32 +0000 Received: from rnnvmail203.nvidia.com (10.129.68.9) by mail.nvidia.com (10.129.200.66) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.41; Tue, 17 Oct 2023 06:43:14 -0700 Received: from rnnvmail205.nvidia.com (10.129.68.10) by rnnvmail203.nvidia.com (10.129.68.9) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.41; Tue, 17 Oct 2023 06:43:14 -0700 Received: from vdi.nvidia.com (10.127.8.10) by mail.nvidia.com (10.129.68.10) with Microsoft SMTP Server id 15.2.986.41 via Frontend Transport; Tue, 17 Oct 2023 06:43:10 -0700 From: Yishai Hadas To: , , , CC: , , , , , , , , , , Subject: [PATCH V1 vfio 8/9] vfio/pci: Expose vfio_pci_iowrite/read##size() Date: Tue, 17 Oct 2023 16:42:16 +0300 Message-ID: <20231017134217.82497-9-yishaih@nvidia.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20231017134217.82497-1-yishaih@nvidia.com> References: <20231017134217.82497-1-yishaih@nvidia.com> MIME-Version: 1.0 X-NV-OnPremToCloud: ExternallySecured X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: SA2PEPF00001508:EE_|DS0PR12MB8366:EE_ X-MS-Office365-Filtering-Correlation-Id: fe720352-7dab-450d-08b4-08dbcf170dc4 X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: PzyYS0fZWDn0wWvaN5pP3ItCxewfXXZDFAcNapxF/GA85eXKWaoOAZsFw3GJgK8KM+uOGrj1n58LKHilVLYnr6D4cfk7TP8iAmDYqvprF7e6v8Tk5RiC9rygUVG7XGPSpqjskBD0mDRhp/Elqleh6G7hlhxtRJkc2Qbmi2WhvB+LbfZP3aIcbuyBvt2D3MoAn+Lyt9q5qGYV9N6lN33xzmayY1LGqgaFPC6HpBblTVE2fyM2KwU4KZcOxjnhCc/r5P19p8PNKE2CtXq5qPn7lnKhfSxG0pqbSEA+e4ZdIAAQtqbvkyCvg3NeW5RAefD/IZm57cdw2br+2ryZt0Xc9Ohw9lf2VF/jXWCAIXKmmt5B4qO7/llsvBWvexNHiyY/w72JSwWVz96n+FgJb6FdTrg33hzK2oEfA9NLr1Zre6qKq6FHcaks4Ii7Dl/BYMNhWnS4usOToJUbT7iKBj5rxDdxQ3i3eFvHFh8MPurXNyzacFgk00gAY1W/xiEoFwas2GDDG9dUh2BCueuWaqRbxdXkPA4EJI3OrDvYSvOjvn++TlWMeeWfqfuS6a/MaQiRb1Ltbpl1WJ6u7jD3jGDThqlExK9fmjJwHKURw0kzuFUzYnUl6VsbN3TN1iZ+XC/PztIMAXu9NIztv5ATmRy4E59TudEoRNCx92zrTvXhHZsqc39RAnaqeCGFCu/HpeFUC79afERdaIBdPhQERaGzrL2PE7Yg4crMDPNZrGZnBT+izGWGZTvt2UTCTcVKEm/PWrVGQIgerlkcEjqJf55Olw== X-Forefront-Antispam-Report: CIP:216.228.117.160;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:mail.nvidia.com;PTR:dc6edge1.nvidia.com;CAT:NONE;SFS:(13230031)(4636009)(346002)(39860400002)(396003)(376002)(136003)(230922051799003)(451199024)(64100799003)(82310400011)(186009)(1800799009)(36840700001)(40470700004)(46966006)(336012)(40480700001)(40460700003)(82740400003)(36756003)(356005)(47076005)(83380400001)(36860700001)(7636003)(26005)(7696005)(2906002)(54906003)(316002)(6636002)(70206006)(1076003)(426003)(2616005)(70586007)(478600001)(107886003)(110136005)(86362001)(41300700001)(8936002)(8676002)(4326008)(5660300002)(2101003);DIR:OUT;SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Oct 2023 13:43:32.3643 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: fe720352-7dab-450d-08b4-08dbcf170dc4 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a;Ip=[216.228.117.160];Helo=[mail.nvidia.com] 
X-MS-Exchange-CrossTenant-AuthSource: SA2PEPF00001508.namprd04.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS0PR12MB8366 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Expose vfio_pci_iowrite/read##size() to let it be used by drivers. This functionality is needed to enable direct access to some physical BAR of the device with the proper locks/checks in place. The next patches from this series will use this functionality on a data path flow when a direct access to the BAR is needed. Signed-off-by: Yishai Hadas --- drivers/vfio/pci/vfio_pci_rdwr.c | 10 ++++++---- include/linux/vfio_pci_core.h | 19 +++++++++++++++++++ 2 files changed, 25 insertions(+), 4 deletions(-) diff --git a/drivers/vfio/pci/vfio_pci_rdwr.c b/drivers/vfio/pci/vfio_pci_rdwr.c index 6f08b3ecbb89..817ec9a89123 100644 --- a/drivers/vfio/pci/vfio_pci_rdwr.c +++ b/drivers/vfio/pci/vfio_pci_rdwr.c @@ -38,7 +38,7 @@ #define vfio_iowrite8 iowrite8 #define VFIO_IOWRITE(size) \ -static int vfio_pci_iowrite##size(struct vfio_pci_core_device *vdev, \ +int vfio_pci_iowrite##size(struct vfio_pci_core_device *vdev, \ bool test_mem, u##size val, void __iomem *io) \ { \ if (test_mem) { \ @@ -55,7 +55,8 @@ static int vfio_pci_iowrite##size(struct vfio_pci_core_device *vdev, \ up_read(&vdev->memory_lock); \ \ return 0; \ -} +} \ +EXPORT_SYMBOL_GPL(vfio_pci_iowrite##size); VFIO_IOWRITE(8) VFIO_IOWRITE(16) @@ -65,7 +66,7 @@ VFIO_IOWRITE(64) #endif #define VFIO_IOREAD(size) \ -static int vfio_pci_ioread##size(struct vfio_pci_core_device *vdev, \ +int vfio_pci_ioread##size(struct vfio_pci_core_device *vdev, \ bool test_mem, u##size *val, void __iomem *io) \ { \ if (test_mem) { \ @@ -82,7 +83,8 @@ static int vfio_pci_ioread##size(struct vfio_pci_core_device *vdev, \ up_read(&vdev->memory_lock); \ \ return 0; \ -} +} \ +EXPORT_SYMBOL_GPL(vfio_pci_ioread##size); VFIO_IOREAD(8) VFIO_IOREAD(16) diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h index 67ac58e20e1d..22c915317788 100644 --- a/include/linux/vfio_pci_core.h +++ b/include/linux/vfio_pci_core.h @@ -131,4 +131,23 @@ int vfio_pci_core_setup_barmap(struct vfio_pci_core_device *vdev, int bar); pci_ers_result_t vfio_pci_core_aer_err_detected(struct pci_dev *pdev, pci_channel_state_t state); +#define VFIO_IOWRITE_DECLATION(size) \ +int vfio_pci_iowrite##size(struct vfio_pci_core_device *vdev, \ + bool test_mem, u##size val, void __iomem *io); + +VFIO_IOWRITE_DECLATION(8) +VFIO_IOWRITE_DECLATION(16) +VFIO_IOWRITE_DECLATION(32) +#ifdef iowrite64 +VFIO_IOWRITE_DECLATION(64) +#endif + +#define VFIO_IOREAD_DECLATION(size) \ +int vfio_pci_ioread##size(struct vfio_pci_core_device *vdev, \ + bool test_mem, u##size *val, void __iomem *io); + +VFIO_IOREAD_DECLATION(8) +VFIO_IOREAD_DECLATION(16) +VFIO_IOREAD_DECLATION(32) + #endif /* VFIO_PCI_CORE_H */ From patchwork Tue Oct 17 13:42:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yishai Hadas X-Patchwork-Id: 13425189 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9D6E0CDB484 for ; Tue, 17 Oct 2023 13:43:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343642AbjJQNnn (ORCPT ); Tue, 17 Oct 2023 
cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6907.20 via Frontend Transport; Tue, 17 Oct 2023 13:43:32 +0000 Received: from rnnvmail201.nvidia.com (10.129.68.8) by mail.nvidia.com (10.129.200.67) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.41; Tue, 17 Oct 2023 06:43:18 -0700 Received: from rnnvmail205.nvidia.com (10.129.68.10) by rnnvmail201.nvidia.com (10.129.68.8) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.41; Tue, 17 Oct 2023 06:43:18 -0700 Received: from vdi.nvidia.com (10.127.8.10) by mail.nvidia.com (10.129.68.10) with Microsoft SMTP Server id 15.2.986.41 via Frontend Transport; Tue, 17 Oct 2023 06:43:14 -0700 From: Yishai Hadas To: , , , CC: , , , , , , , , , , Subject: [PATCH V1 vfio 9/9] vfio/virtio: Introduce a vfio driver over virtio devices Date: Tue, 17 Oct 2023 16:42:17 +0300 Message-ID: <20231017134217.82497-10-yishaih@nvidia.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20231017134217.82497-1-yishaih@nvidia.com> References: <20231017134217.82497-1-yishaih@nvidia.com> MIME-Version: 1.0 X-NV-OnPremToCloud: ExternallySecured X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: SN1PEPF0002529E:EE_|SJ2PR12MB7917:EE_ X-MS-Office365-Filtering-Correlation-Id: 53ef4151-a90a-4bbd-24bf-08dbcf170e0a X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: uftlY3JQy2wFeQ/4y3c8JuzDzJ9gJFmy4M61Lo9FedJ5Zc74ZxZhY/SXY97nT+PkXMv1Z/WT07umIzbm6SyO5hiDED+DDyTmqRQ4lNrlrpNDlwZfhmUCWNU4JYN7+W3DorBRhaL2b21DJofDCPhQw4iEmNPwAdl1/r3rRK7dIkrJDx5pv5yg4vUApIkt0QiJLqsygk5DBzKE68Db5AEODc5+QiV0N0W9ADpRfDbBxUZ1+mzmEZhgO7ZXEK9Jbazuvgs8+jODubi68AmLpVorCoECd81zKmEHkA1kDR/MDklrygpc8KXirNjMSylaLPGNFwrLqevbcRoU1Q3fkmbPFUv3+grhyH+KYybuBBcOtOFILG77ib87HJFt03MawVA4YrmkV9l4nUCvefyOLH6OM9FvWvBJsFhgxhdGYZPfdiipWk4g4P6CkOkkesmOOktf1EumSm314C2Ti4tztOwUOLgfEruKdQrvG8kQRDEeaqjW7xqbS/+4dI52/JIXe7VGy0sWsaeinmdPU8q5UgjVeyH7X9kNZXfw1MbW4+xTZ87Vaw1EcT/pYizyj0pw4nU0ou0uY9+VGzc7HW1d2FEoTjLtyRsConvOp8N87Bz/krz7ofHSVILWJZk3H3QfBJLoE7oH+axHaT2emKM0KLYrke/SHYhHjoMw0l6sAqeJ4RubrvswJ1nUcE8y/+nV6s8Zezid/lypDho5pV0CmhzKLLFOk78rMQVh2B/ODnq8bzxYLSF8yBExyjU9TgjEmXOHoAvc4PE3z8LeeEkM5Ddcgomeo/dnJ9k0bVB85yllLi/RNYvew0z9u8nf38zyW8/P X-Forefront-Antispam-Report: CIP:216.228.117.161;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:mail.nvidia.com;PTR:dc6edge2.nvidia.com;CAT:NONE;SFS:(13230031)(4636009)(396003)(136003)(376002)(346002)(39860400002)(230922051799003)(82310400011)(186009)(451199024)(1800799009)(64100799003)(46966006)(36840700001)(40470700004)(40460700003)(26005)(2616005)(426003)(336012)(1076003)(110136005)(7696005)(47076005)(36860700001)(107886003)(83380400001)(41300700001)(2906002)(30864003)(4326008)(8676002)(8936002)(478600001)(966005)(5660300002)(54906003)(70206006)(70586007)(316002)(6636002)(356005)(7636003)(40480700001)(82740400003)(36756003)(86362001)(2101003);DIR:OUT;SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Oct 2023 13:43:32.8394 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 53ef4151-a90a-4bbd-24bf-08dbcf170e0a X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a;Ip=[216.228.117.161];Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: SN1PEPF0002529E.namprd05.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous 
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ2PR12MB7917 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Introduce a vfio driver over virtio devices to support the legacy interface functionality for VFs. Background, from the virtio spec [1]. -------------------------------------------------------------------- In some systems, there is a need to support a virtio legacy driver with a device that does not directly support the legacy interface. In such scenarios, a group owner device can provide the legacy interface functionality for the group member devices. The driver of the owner device can then access the legacy interface of a member device on behalf of the legacy member device driver. For example, with the SR-IOV group type, group members (VFs) can not present the legacy interface in an I/O BAR in BAR0 as expected by the legacy pci driver. If the legacy driver is running inside a virtual machine, the hypervisor executing the virtual machine can present a virtual device with an I/O BAR in BAR0. The hypervisor intercepts the legacy driver accesses to this I/O BAR and forwards them to the group owner device (PF) using group administration commands. -------------------------------------------------------------------- Specifically, this driver adds support for a virtio-net VF to be exposed as a transitional device to a guest driver, providing the legacy I/O BAR functionality on top. This allows a VM that uses a legacy virtio-net driver in the guest to work transparently over a VF whose host-side driver is this new driver. The driver can easily be extended to support other types of virtio devices (e.g. virtio-blk) by adding the type-specific properties in a few places, as was done for virtio-net. For now, only the virtio-net use case has been tested, so support is introduced only for that device type. Practically, upon probing a VF of a virtio-net device, if its PF supports legacy access over the virtio admin commands and the VF does not have BAR 0, we set specific 'vfio_device_ops' to simulate in software a transitional device with an I/O BAR in BAR 0. The existence of the simulated I/O BAR is reported later on by overriding the VFIO_DEVICE_GET_REGION_INFO command, and the device exposes itself as a transitional device by overriding some properties when its config space is read. Once the I/O BAR is reported as BAR 0, a legacy driver in the guest may use it via read/write calls according to the virtio specification. Any read/write towards the control parts of the BAR will be captured by the new driver and translated into admin commands towards the device. Any data path read/write access (i.e. virtio driver notifications) will be forwarded to the physical BAR, whose properties were supplied by the VIRTIO_ADMIN_CMD_LEGACY_NOTIFY_INFO admin command during the probing/init flow. With that code in place, a legacy driver in the guest has the look and feel of a transitional device with legacy support for both its control and data path flows.
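For clarity, the control-path translation described above boils down to a small dispatch rule; the queue notify register is special-cased and forwarded to the physical notify BAR instead of going through the admin queue. The sketch below uses an illustrative helper name and merely restates what the real implementation in main.c (further down) does.

#include <linux/types.h>
#include <linux/virtio_pci.h>

/*
 * Illustrative only: choose the legacy admin opcode for a trapped BAR0
 * access. VIRTIO_PCI_CONFIG_OFF() is 24 with MSI-X enabled and 20 without,
 * i.e. the start of the device-specific config area in the legacy layout.
 */
static u16 example_legacy_opcode(loff_t pos, bool msix_enabled, bool read)
{
	if (pos < VIRTIO_PCI_CONFIG_OFF(msix_enabled))
		return read ? VIRTIO_ADMIN_CMD_LEGACY_COMMON_CFG_READ :
			      VIRTIO_ADMIN_CMD_LEGACY_COMMON_CFG_WRITE;
	return read ? VIRTIO_ADMIN_CMD_LEGACY_DEV_CFG_READ :
		      VIRTIO_ADMIN_CMD_LEGACY_DEV_CFG_WRITE;
}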
[1] https://github.com/oasis-tcs/virtio-spec/commit/03c2d32e5093ca9f2a17797242fbef88efe94b8c Signed-off-by: Yishai Hadas --- MAINTAINERS | 7 + drivers/vfio/pci/Kconfig | 2 + drivers/vfio/pci/Makefile | 2 + drivers/vfio/pci/virtio/Kconfig | 15 + drivers/vfio/pci/virtio/Makefile | 4 + drivers/vfio/pci/virtio/main.c | 577 +++++++++++++++++++++++++++++++ 6 files changed, 607 insertions(+) create mode 100644 drivers/vfio/pci/virtio/Kconfig create mode 100644 drivers/vfio/pci/virtio/Makefile create mode 100644 drivers/vfio/pci/virtio/main.c diff --git a/MAINTAINERS b/MAINTAINERS index 7a7bd8bd80e9..680a70063775 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -22620,6 +22620,13 @@ L: kvm@vger.kernel.org S: Maintained F: drivers/vfio/pci/mlx5/ +VFIO VIRTIO PCI DRIVER +M: Yishai Hadas +L: kvm@vger.kernel.org +L: virtualization@lists.linux-foundation.org +S: Maintained +F: drivers/vfio/pci/virtio + VFIO PCI DEVICE SPECIFIC DRIVERS R: Jason Gunthorpe R: Yishai Hadas diff --git a/drivers/vfio/pci/Kconfig b/drivers/vfio/pci/Kconfig index 8125e5f37832..18c397df566d 100644 --- a/drivers/vfio/pci/Kconfig +++ b/drivers/vfio/pci/Kconfig @@ -65,4 +65,6 @@ source "drivers/vfio/pci/hisilicon/Kconfig" source "drivers/vfio/pci/pds/Kconfig" +source "drivers/vfio/pci/virtio/Kconfig" + endmenu diff --git a/drivers/vfio/pci/Makefile b/drivers/vfio/pci/Makefile index 45167be462d8..046139a4eca5 100644 --- a/drivers/vfio/pci/Makefile +++ b/drivers/vfio/pci/Makefile @@ -13,3 +13,5 @@ obj-$(CONFIG_MLX5_VFIO_PCI) += mlx5/ obj-$(CONFIG_HISI_ACC_VFIO_PCI) += hisilicon/ obj-$(CONFIG_PDS_VFIO_PCI) += pds/ + +obj-$(CONFIG_VIRTIO_VFIO_PCI) += virtio/ diff --git a/drivers/vfio/pci/virtio/Kconfig b/drivers/vfio/pci/virtio/Kconfig new file mode 100644 index 000000000000..89eddce8b1bd --- /dev/null +++ b/drivers/vfio/pci/virtio/Kconfig @@ -0,0 +1,15 @@ +# SPDX-License-Identifier: GPL-2.0-only +config VIRTIO_VFIO_PCI + tristate "VFIO support for VIRTIO PCI devices" + depends on VIRTIO_PCI + select VFIO_PCI_CORE + help + This provides support for exposing VIRTIO VF devices using the VFIO + framework that can work with a legacy virtio driver in the guest. + Based on PCIe spec, VFs do not support I/O Space; thus, VF BARs shall + not indicate I/O Space. + As of that this driver emulated I/O BAR in software to let a VF be + seen as a transitional device in the guest and let it work with + a legacy driver. + + If you don't know what to do here, say N. diff --git a/drivers/vfio/pci/virtio/Makefile b/drivers/vfio/pci/virtio/Makefile new file mode 100644 index 000000000000..2039b39fb723 --- /dev/null +++ b/drivers/vfio/pci/virtio/Makefile @@ -0,0 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0-only +obj-$(CONFIG_VIRTIO_VFIO_PCI) += virtio-vfio-pci.o +virtio-vfio-pci-y := main.o + diff --git a/drivers/vfio/pci/virtio/main.c b/drivers/vfio/pci/virtio/main.c new file mode 100644 index 000000000000..3fef4b21f7e6 --- /dev/null +++ b/drivers/vfio/pci/virtio/main.c @@ -0,0 +1,577 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. 
+ */
+
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/pci.h>
+#include <linux/pm_runtime.h>
+#include <linux/types.h>
+#include <linux/uaccess.h>
+#include <linux/vfio.h>
+#include <linux/vfio_pci_core.h>
+#include <linux/virtio_pci.h>
+#include <linux/virtio_net.h>
+#include <linux/virtio_pci_admin.h>
+
+struct virtiovf_pci_core_device {
+	struct vfio_pci_core_device core_device;
+	u8 bar0_virtual_buf_size;
+	u8 *bar0_virtual_buf;
+	/* synchronize access to the virtual buf */
+	struct mutex bar_mutex;
+	void __iomem *notify_addr;
+	u32 notify_offset;
+	u8 notify_bar;
+	u16 pci_cmd;
+	u16 msix_ctrl;
+};
+
+static int
+virtiovf_issue_legacy_rw_cmd(struct virtiovf_pci_core_device *virtvdev,
+			     loff_t pos, char __user *buf,
+			     size_t count, bool read)
+{
+	bool msix_enabled = virtvdev->msix_ctrl & PCI_MSIX_FLAGS_ENABLE;
+	struct pci_dev *pdev = virtvdev->core_device.pdev;
+	u8 *bar0_buf = virtvdev->bar0_virtual_buf;
+	u16 opcode;
+	int ret;
+
+	mutex_lock(&virtvdev->bar_mutex);
+	if (read) {
+		opcode = (pos < VIRTIO_PCI_CONFIG_OFF(msix_enabled)) ?
+			VIRTIO_ADMIN_CMD_LEGACY_COMMON_CFG_READ :
+			VIRTIO_ADMIN_CMD_LEGACY_DEV_CFG_READ;
+		ret = virtio_pci_admin_legacy_io_read(pdev, opcode, pos, count,
+						      bar0_buf + pos);
+		if (ret)
+			goto out;
+		if (copy_to_user(buf, bar0_buf + pos, count))
+			ret = -EFAULT;
+		goto out;
+	}
+
+	if (copy_from_user(bar0_buf + pos, buf, count)) {
+		ret = -EFAULT;
+		goto out;
+	}
+
+	opcode = (pos < VIRTIO_PCI_CONFIG_OFF(msix_enabled)) ?
+			VIRTIO_ADMIN_CMD_LEGACY_COMMON_CFG_WRITE :
+			VIRTIO_ADMIN_CMD_LEGACY_DEV_CFG_WRITE;
+	ret = virtio_pci_admin_legacy_io_write(pdev, opcode, pos, count,
+					       bar0_buf + pos);
+out:
+	mutex_unlock(&virtvdev->bar_mutex);
+	return ret;
+}
+
+static int
+translate_io_bar_to_mem_bar(struct virtiovf_pci_core_device *virtvdev,
+			    loff_t pos, char __user *buf,
+			    size_t count, bool read)
+{
+	struct vfio_pci_core_device *core_device = &virtvdev->core_device;
+	u16 queue_notify;
+	int ret;
+
+	if (pos + count > virtvdev->bar0_virtual_buf_size)
+		return -EINVAL;
+
+	switch (pos) {
+	case VIRTIO_PCI_QUEUE_NOTIFY:
+		if (count != sizeof(queue_notify))
+			return -EINVAL;
+		if (read) {
+			ret = vfio_pci_ioread16(core_device, true, &queue_notify,
+						virtvdev->notify_addr);
+			if (ret)
+				return ret;
+			if (copy_to_user(buf, &queue_notify,
+					 sizeof(queue_notify)))
+				return -EFAULT;
+			break;
+		}
+
+		if (copy_from_user(&queue_notify, buf, count))
+			return -EFAULT;
+
+		ret = vfio_pci_iowrite16(core_device, true, queue_notify,
+					 virtvdev->notify_addr);
+		break;
+	default:
+		ret = virtiovf_issue_legacy_rw_cmd(virtvdev, pos, buf, count,
+						   read);
+	}
+
+	return ret ? ret : count;
+}
+
+static bool range_intersect_range(loff_t range1_start, size_t count1,
+				  loff_t range2_start, size_t count2,
+				  loff_t *start_offset,
+				  size_t *intersect_count,
+				  size_t *register_offset)
+{
+	if (range1_start <= range2_start &&
+	    range1_start + count1 > range2_start) {
+		*start_offset = range2_start - range1_start;
+		*intersect_count = min_t(size_t, count2,
+					 range1_start + count1 - range2_start);
+		if (register_offset)
+			*register_offset = 0;
+		return true;
+	}
+
+	if (range1_start > range2_start &&
+	    range1_start < range2_start + count2) {
+		*start_offset = range1_start;
+		*intersect_count = min_t(size_t, count1,
+					 range2_start + count2 - range1_start);
+		if (register_offset)
+			*register_offset = range1_start - range2_start;
+		return true;
+	}
+
+	return false;
+}
+
+static ssize_t virtiovf_pci_read_config(struct vfio_device *core_vdev,
+					char __user *buf, size_t count,
+					loff_t *ppos)
+{
+	struct virtiovf_pci_core_device *virtvdev = container_of(
+		core_vdev, struct virtiovf_pci_core_device, core_device.vdev);
+	loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
+	size_t register_offset;
+	loff_t copy_offset;
+	size_t copy_count;
+	__le32 val32;
+	__le16 val16;
+	u8 val8;
+	int ret;
+
+	ret = vfio_pci_core_read(core_vdev, buf, count, ppos);
+	if (ret < 0)
+		return ret;
+
+	if (range_intersect_range(pos, count, PCI_DEVICE_ID, sizeof(val16),
+				  &copy_offset, &copy_count, NULL)) {
+		val16 = cpu_to_le16(0x1000);
+		if (copy_to_user(buf + copy_offset, &val16, copy_count))
+			return -EFAULT;
+	}
+
+	if ((virtvdev->pci_cmd & PCI_COMMAND_IO) &&
+	    range_intersect_range(pos, count, PCI_COMMAND, sizeof(val16),
+				  &copy_offset, &copy_count, &register_offset)) {
+		if (copy_from_user((void *)&val16 + register_offset, buf + copy_offset,
+				   copy_count))
+			return -EFAULT;
+		val16 |= cpu_to_le16(PCI_COMMAND_IO);
+		if (copy_to_user(buf + copy_offset, (void *)&val16 + register_offset,
+				 copy_count))
+			return -EFAULT;
+	}
+
+	if (range_intersect_range(pos, count, PCI_REVISION_ID, sizeof(val8),
+				  &copy_offset, &copy_count, NULL)) {
+		/* Transitional needs to have revision 0 */
+		val8 = 0;
+		if (copy_to_user(buf + copy_offset, &val8, copy_count))
+			return -EFAULT;
+	}
+
+	if (range_intersect_range(pos, count, PCI_BASE_ADDRESS_0, sizeof(val32),
+				  &copy_offset, &copy_count, NULL)) {
+		val32 = cpu_to_le32(PCI_BASE_ADDRESS_SPACE_IO);
+		if (copy_to_user(buf + copy_offset, &val32, copy_count))
+			return -EFAULT;
+	}
+
+	if (range_intersect_range(pos, count, PCI_SUBSYSTEM_ID, sizeof(val16),
+				  &copy_offset, &copy_count, NULL)) {
+		/*
+		 * Transitional devices use the PCI subsystem device id as
+		 * the virtio device id, same as the legacy driver always did.
+		 */
+		val16 = cpu_to_le16(VIRTIO_ID_NET);
+		if (copy_to_user(buf + copy_offset, &val16, copy_count))
+			return -EFAULT;
+	}
+
+	return count;
+}
+
+static ssize_t
+virtiovf_pci_core_read(struct vfio_device *core_vdev, char __user *buf,
+		       size_t count, loff_t *ppos)
+{
+	struct virtiovf_pci_core_device *virtvdev = container_of(
+		core_vdev, struct virtiovf_pci_core_device, core_device.vdev);
+	struct pci_dev *pdev = virtvdev->core_device.pdev;
+	unsigned int index = VFIO_PCI_OFFSET_TO_INDEX(*ppos);
+	loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
+	int ret;
+
+	if (!count)
+		return 0;
+
+	if (index == VFIO_PCI_CONFIG_REGION_INDEX)
+		return virtiovf_pci_read_config(core_vdev, buf, count, ppos);
+
+	if (index != VFIO_PCI_BAR0_REGION_INDEX)
+		return vfio_pci_core_read(core_vdev, buf, count, ppos);
+
+	ret = pm_runtime_resume_and_get(&pdev->dev);
+	if (ret) {
+		pci_info_ratelimited(pdev, "runtime resume failed %d\n",
+				     ret);
+		return -EIO;
+	}
+
+	ret = translate_io_bar_to_mem_bar(virtvdev, pos, buf, count, true);
+	pm_runtime_put(&pdev->dev);
+	return ret;
+}
+
+static ssize_t
+virtiovf_pci_core_write(struct vfio_device *core_vdev, const char __user *buf,
+			size_t count, loff_t *ppos)
+{
+	struct virtiovf_pci_core_device *virtvdev = container_of(
+		core_vdev, struct virtiovf_pci_core_device, core_device.vdev);
+	struct pci_dev *pdev = virtvdev->core_device.pdev;
+	unsigned int index = VFIO_PCI_OFFSET_TO_INDEX(*ppos);
+	loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
+	int ret;
+
+	if (!count)
+		return 0;
+
+	if (index == VFIO_PCI_CONFIG_REGION_INDEX) {
+		size_t register_offset;
+		loff_t copy_offset;
+		size_t copy_count;
+
+		if (range_intersect_range(pos, count, PCI_COMMAND, sizeof(virtvdev->pci_cmd),
+					  &copy_offset, &copy_count,
+					  &register_offset)) {
+			if (copy_from_user((void *)&virtvdev->pci_cmd + register_offset,
+					   buf + copy_offset,
+					   copy_count))
+				return -EFAULT;
+		}
+
+		if (range_intersect_range(pos, count, pdev->msix_cap + PCI_MSIX_FLAGS,
+					  sizeof(virtvdev->msix_ctrl),
+					  &copy_offset, &copy_count,
+					  &register_offset)) {
+			if (copy_from_user((void *)&virtvdev->msix_ctrl + register_offset,
+					   buf + copy_offset,
+					   copy_count))
+				return -EFAULT;
+		}
+	}
+
+	if (index != VFIO_PCI_BAR0_REGION_INDEX)
+		return vfio_pci_core_write(core_vdev, buf, count, ppos);
+
+	ret = pm_runtime_resume_and_get(&pdev->dev);
+	if (ret) {
+		pci_info_ratelimited(pdev, "runtime resume failed %d\n", ret);
+		return -EIO;
+	}
+
+	ret = translate_io_bar_to_mem_bar(virtvdev, pos, (char __user *)buf, count, false);
+	pm_runtime_put(&pdev->dev);
+	return ret;
+}
+
+static int
+virtiovf_pci_ioctl_get_region_info(struct vfio_device *core_vdev,
+				   unsigned int cmd, unsigned long arg)
+{
+	struct virtiovf_pci_core_device *virtvdev = container_of(
+		core_vdev, struct virtiovf_pci_core_device, core_device.vdev);
+	unsigned long minsz = offsetofend(struct vfio_region_info, offset);
+	void __user *uarg = (void __user *)arg;
+	struct vfio_region_info info = {};
+
+	if (copy_from_user(&info, uarg, minsz))
+		return -EFAULT;
+
+	if (info.argsz < minsz)
+		return -EINVAL;
+
+	switch (info.index) {
+	case VFIO_PCI_BAR0_REGION_INDEX:
+		info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
+		info.size = virtvdev->bar0_virtual_buf_size;
+		info.flags = VFIO_REGION_INFO_FLAG_READ |
+			     VFIO_REGION_INFO_FLAG_WRITE;
+		return copy_to_user(uarg, &info, minsz) ? -EFAULT : 0;
+	default:
+		return vfio_pci_core_ioctl(core_vdev, cmd, arg);
+	}
+}
+
+static long
+virtiovf_vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
+			     unsigned long arg)
+{
+	switch (cmd) {
+	case VFIO_DEVICE_GET_REGION_INFO:
+		return virtiovf_pci_ioctl_get_region_info(core_vdev, cmd, arg);
+	default:
+		return vfio_pci_core_ioctl(core_vdev, cmd, arg);
+	}
+}
+
+static int
+virtiovf_set_notify_addr(struct virtiovf_pci_core_device *virtvdev)
+{
+	struct vfio_pci_core_device *core_device = &virtvdev->core_device;
+	int ret;
+
+	/*
+	 * Set up the BAR where the 'notify' area exists, for use by vfio as
+	 * well. This lets us mmap it only once and reuse it when needed.
+	 */
+	ret = vfio_pci_core_setup_barmap(core_device,
+					 virtvdev->notify_bar);
+	if (ret)
+		return ret;
+
+	virtvdev->notify_addr = core_device->barmap[virtvdev->notify_bar] +
+			virtvdev->notify_offset;
+	return 0;
+}
+
+static int virtiovf_pci_open_device(struct vfio_device *core_vdev)
+{
+	struct virtiovf_pci_core_device *virtvdev = container_of(
+		core_vdev, struct virtiovf_pci_core_device, core_device.vdev);
+	struct vfio_pci_core_device *vdev = &virtvdev->core_device;
+	int ret;
+
+	ret = vfio_pci_core_enable(vdev);
+	if (ret)
+		return ret;
+
+	if (virtvdev->bar0_virtual_buf) {
+		/*
+		 * Upon close_device(), vfio_pci_core_disable() is called and
+		 * closes all the previous mmaps, so the valid life cycle for
+		 * the 'notify' addr is per open/close.
+		 */
+		ret = virtiovf_set_notify_addr(virtvdev);
+		if (ret) {
+			vfio_pci_core_disable(vdev);
+			return ret;
+		}
+	}
+
+	vfio_pci_core_finish_enable(vdev);
+	return 0;
+}
+
+static int virtiovf_get_device_config_size(unsigned short device)
+{
+	/* Network card */
+	return offsetofend(struct virtio_net_config, status);
+}
+
+static int virtiovf_read_notify_info(struct virtiovf_pci_core_device *virtvdev)
+{
+	u64 offset;
+	int ret;
+	u8 bar;
+
+	ret = virtio_pci_admin_legacy_io_notify_info(virtvdev->core_device.pdev,
+				VIRTIO_ADMIN_CMD_NOTIFY_INFO_FLAGS_OWNER_MEM,
+				&bar, &offset);
+	if (ret)
+		return ret;
+
+	virtvdev->notify_bar = bar;
+	virtvdev->notify_offset = offset;
+	return 0;
+}
+
+static int virtiovf_pci_init_device(struct vfio_device *core_vdev)
+{
+	struct virtiovf_pci_core_device *virtvdev = container_of(
+		core_vdev, struct virtiovf_pci_core_device, core_device.vdev);
+	struct pci_dev *pdev;
+	int ret;
+
+	ret = vfio_pci_core_init_dev(core_vdev);
+	if (ret)
+		return ret;
+
+	pdev = virtvdev->core_device.pdev;
+	ret = virtiovf_read_notify_info(virtvdev);
+	if (ret)
+		return ret;
+
+	/* Allocate a buffer large enough for the MSI-X enabled layout */
+	virtvdev->bar0_virtual_buf_size = VIRTIO_PCI_CONFIG_OFF(true) +
+				virtiovf_get_device_config_size(pdev->device);
+	virtvdev->bar0_virtual_buf = kzalloc(virtvdev->bar0_virtual_buf_size,
+					     GFP_KERNEL);
+	if (!virtvdev->bar0_virtual_buf)
+		return -ENOMEM;
+	mutex_init(&virtvdev->bar_mutex);
+	return 0;
+}
+
+static void virtiovf_pci_core_release_dev(struct vfio_device *core_vdev)
+{
+	struct virtiovf_pci_core_device *virtvdev = container_of(
+		core_vdev, struct virtiovf_pci_core_device, core_device.vdev);
+
+	kfree(virtvdev->bar0_virtual_buf);
+	vfio_pci_core_release_dev(core_vdev);
+}
+
+static const struct vfio_device_ops virtiovf_acc_vfio_pci_tran_ops = {
+	.name = "virtio-transitional-vfio-pci",
+	.init = virtiovf_pci_init_device,
+	.release = virtiovf_pci_core_release_dev,
+	.open_device = virtiovf_pci_open_device,
+	.close_device = vfio_pci_core_close_device,
+	.ioctl = virtiovf_vfio_pci_core_ioctl,
+	.read = virtiovf_pci_core_read,
+	.write = virtiovf_pci_core_write,
+	.mmap = vfio_pci_core_mmap,
+	.request = vfio_pci_core_request,
+	.match = vfio_pci_core_match,
+	.bind_iommufd = vfio_iommufd_physical_bind,
+	.unbind_iommufd = vfio_iommufd_physical_unbind,
+	.attach_ioas = vfio_iommufd_physical_attach_ioas,
+};
+
+static const struct vfio_device_ops virtiovf_acc_vfio_pci_ops = {
+	.name = "virtio-acc-vfio-pci",
+	.init = vfio_pci_core_init_dev,
+	.release = vfio_pci_core_release_dev,
+	.open_device = virtiovf_pci_open_device,
+	.close_device = vfio_pci_core_close_device,
+	.ioctl = vfio_pci_core_ioctl,
+	.device_feature = vfio_pci_core_ioctl_feature,
+	.read = vfio_pci_core_read,
+	.write = vfio_pci_core_write,
+	.mmap = vfio_pci_core_mmap,
+	.request = vfio_pci_core_request,
+	.match = vfio_pci_core_match,
+	.bind_iommufd = vfio_iommufd_physical_bind,
+	.unbind_iommufd = vfio_iommufd_physical_unbind,
+	.attach_ioas = vfio_iommufd_physical_attach_ioas,
+};
+
+static bool virtiovf_bar0_exists(struct pci_dev *pdev)
+{
+	struct resource *res = pdev->resource;
+
+	return res->flags ? true : false;
+}
+
+#define VIRTIOVF_USE_ADMIN_CMD_BITMAP \
+	(BIT_ULL(VIRTIO_ADMIN_CMD_LIST_QUERY) | \
+	 BIT_ULL(VIRTIO_ADMIN_CMD_LIST_USE) | \
+	 BIT_ULL(VIRTIO_ADMIN_CMD_LEGACY_COMMON_CFG_WRITE) | \
+	 BIT_ULL(VIRTIO_ADMIN_CMD_LEGACY_COMMON_CFG_READ) | \
+	 BIT_ULL(VIRTIO_ADMIN_CMD_LEGACY_DEV_CFG_WRITE) | \
+	 BIT_ULL(VIRTIO_ADMIN_CMD_LEGACY_DEV_CFG_READ) | \
+	 BIT_ULL(VIRTIO_ADMIN_CMD_LEGACY_NOTIFY_INFO))
+
+static bool virtiovf_support_legacy_access(struct pci_dev *pdev)
+{
+	int buf_size = DIV_ROUND_UP(VIRTIO_ADMIN_MAX_CMD_OPCODE, 64) * 8;
+	u8 *buf;
+	int ret;
+
+	buf = kzalloc(buf_size, GFP_KERNEL);
+	if (!buf)
+		return false;
+
+	ret = virtio_pci_admin_list_query(pdev, buf, buf_size);
+	if (ret)
+		goto end;
+
+	if ((le64_to_cpup((__le64 *)buf) & VIRTIOVF_USE_ADMIN_CMD_BITMAP) !=
+		VIRTIOVF_USE_ADMIN_CMD_BITMAP) {
+		ret = -EOPNOTSUPP;
+		goto end;
+	}
+
+	/* Confirm the used commands */
+	memset(buf, 0, buf_size);
+	*(__le64 *)buf = cpu_to_le64(VIRTIOVF_USE_ADMIN_CMD_BITMAP);
+	ret = virtio_pci_admin_list_use(pdev, buf, buf_size);
+end:
+	kfree(buf);
+	return ret ? false : true;
+}
+
+static int virtiovf_pci_probe(struct pci_dev *pdev,
+			      const struct pci_device_id *id)
+{
+	const struct vfio_device_ops *ops = &virtiovf_acc_vfio_pci_ops;
+	struct virtiovf_pci_core_device *virtvdev;
+	int ret;
+
+	if (pdev->is_virtfn && virtiovf_support_legacy_access(pdev) &&
+	    !virtiovf_bar0_exists(pdev) && pdev->msix_cap)
+		ops = &virtiovf_acc_vfio_pci_tran_ops;
+
+	virtvdev = vfio_alloc_device(virtiovf_pci_core_device, core_device.vdev,
+				     &pdev->dev, ops);
+	if (IS_ERR(virtvdev))
+		return PTR_ERR(virtvdev);
+
+	dev_set_drvdata(&pdev->dev, &virtvdev->core_device);
+	ret = vfio_pci_core_register_device(&virtvdev->core_device);
+	if (ret)
+		goto out;
+	return 0;
+out:
+	vfio_put_device(&virtvdev->core_device.vdev);
+	return ret;
+}
+
+static void virtiovf_pci_remove(struct pci_dev *pdev)
+{
+	struct virtiovf_pci_core_device *virtvdev = dev_get_drvdata(&pdev->dev);
+
+	vfio_pci_core_unregister_device(&virtvdev->core_device);
+	vfio_put_device(&virtvdev->core_device.vdev);
+}
+
+static const struct pci_device_id virtiovf_pci_table[] = {
+	/* Only virtio-net is supported/tested so far */
+	{ PCI_DRIVER_OVERRIDE_DEVICE_VFIO(PCI_VENDOR_ID_REDHAT_QUMRANET, 0x1041) },
+	{}
+};
+
+MODULE_DEVICE_TABLE(pci, virtiovf_pci_table);
+
+static struct pci_driver virtiovf_pci_driver = {
+	.name = KBUILD_MODNAME,
+	.id_table = virtiovf_pci_table,
+	.probe = virtiovf_pci_probe,
+	.remove = virtiovf_pci_remove,
+	.err_handler = &vfio_pci_core_err_handlers,
+	.driver_managed_dma = true,
+};
+
+module_pci_driver(virtiovf_pci_driver);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Yishai Hadas ");
+MODULE_DESCRIPTION(
+	"VIRTIO VFIO PCI - User Level meta-driver for VIRTIO device family");
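
For completeness, a minimal userspace sketch (not part of this patch) of
how the emulated BAR 0 shows up through the VFIO uAPI. It assumes
'device_fd' is an already-open VFIO device fd for a VF bound to
virtio-vfio-pci; the function name and the probed offset are only for
the example:

	#include <stdint.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/vfio.h>

	static int dump_legacy_bar0_status(int device_fd)
	{
		struct vfio_region_info info = { .argsz = sizeof(info) };
		uint8_t dev_status;

		info.index = VFIO_PCI_BAR0_REGION_INDEX;
		/* Served by the overridden VFIO_DEVICE_GET_REGION_INFO above. */
		if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &info) < 0)
			return -1;

		/* Offset 18 is VIRTIO_PCI_STATUS in the legacy layout. */
		if (pread(device_fd, &dev_status, sizeof(dev_status),
			  info.offset + 18) != sizeof(dev_status))
			return -1;

		return dev_status;
	}

Reads that land in the control parts of this region are translated by
the driver into legacy admin commands towards the PF, as described
above.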