From patchwork Tue Aug 31 02:59:11 2021
X-Patchwork-Submitter: Nicolin Chen
X-Patchwork-Id: 12466367
From: Nicolin Chen
Subject: [RFC][PATCH v2 01/13] iommu: Add set_nesting_vmid/get_nesting_vmid functions
Date: Mon, 30 Aug 2021 19:59:11 -0700
Message-ID: <20210831025923.15812-2-nicolinc@nvidia.com>
In-Reply-To: <20210831025923.15812-1-nicolinc@nvidia.com>
X-Mailing-List: kvm@vger.kernel.org

VMID stands for Virtual Machine Identifier; it is used to tag TLB entries
to indicate which VM they belong to. Some IOMMUs, such as SMMUv3, use it
in nesting mode for the virtualization case.

This patch adds a pair of new iommu_ops callbacks, along with a pair of
exported set/get functions, to allow the VFIO core to access the VMID
value in an IOMMU driver.

Signed-off-by: Nicolin Chen
---
 drivers/iommu/iommu.c | 20 ++++++++++++++++++++
 include/linux/iommu.h |  5 +++++
 2 files changed, 25 insertions(+)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 3303d707bab4..051f2df36dc0 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2774,6 +2774,26 @@ int iommu_enable_nesting(struct iommu_domain *domain)
 }
 EXPORT_SYMBOL_GPL(iommu_enable_nesting);
 
+int iommu_set_nesting_vmid(struct iommu_domain *domain, u32 vmid)
+{
+	if (domain->type != IOMMU_DOMAIN_UNMANAGED)
+		return -EINVAL;
+	if (!domain->ops->set_nesting_vmid)
+		return -EINVAL;
+	return domain->ops->set_nesting_vmid(domain, vmid);
+}
+EXPORT_SYMBOL_GPL(iommu_set_nesting_vmid);
+
+int iommu_get_nesting_vmid(struct iommu_domain *domain, u32 *vmid)
+{
+	if (domain->type != IOMMU_DOMAIN_UNMANAGED)
+		return -EINVAL;
+	if (!domain->ops->get_nesting_vmid)
+		return -EINVAL;
+	return domain->ops->get_nesting_vmid(domain, vmid);
+}
+EXPORT_SYMBOL_GPL(iommu_get_nesting_vmid);
+
 int iommu_set_pgtable_quirks(struct iommu_domain *domain,
 			     unsigned long quirk)
 {
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index d2f3435e7d17..bda6b3450909 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -163,6 +163,7 @@ enum iommu_dev_features {
 };
 
 #define IOMMU_PASID_INVALID	(-1U)
+#define IOMMU_VMID_INVALID	(-1U)
 
 #ifdef CONFIG_IOMMU_API
 
@@ -269,6 +270,8 @@ struct iommu_ops {
 	void (*probe_finalize)(struct device *dev);
 	struct iommu_group *(*device_group)(struct device *dev);
 	int (*enable_nesting)(struct iommu_domain *domain);
+	int (*set_nesting_vmid)(struct iommu_domain *domain, u32 vmid);
+	int (*get_nesting_vmid)(struct iommu_domain *domain, u32 *vmid);
 	int (*set_pgtable_quirks)(struct iommu_domain *domain,
 				  unsigned long quirks);
 
@@ -500,6 +503,8 @@ extern int iommu_group_id(struct iommu_group *group);
 extern struct iommu_domain *iommu_group_default_domain(struct iommu_group *);
 
 int iommu_enable_nesting(struct iommu_domain *domain);
+int iommu_set_nesting_vmid(struct iommu_domain *domain, u32 vmid);
+int iommu_get_nesting_vmid(struct iommu_domain *domain, u32 *vmid);
 int iommu_set_pgtable_quirks(struct iommu_domain *domain,
 			     unsigned long quirks);
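The wrappers above follow a common kernel pattern: validate the domain type, check that the driver implements the optional callback, and only then forward the call. The stand-alone sketch below models that dispatch logic in plain C so it can be compiled and exercised outside the kernel; all names here (`mock_domain`, `mock_ops`, `drv_*`, `demo`) are hypothetical stand-ins, not kernel API.

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal stand-ins for iommu_domain / iommu_ops. */
enum mock_domain_type { MOCK_DOMAIN_DMA, MOCK_DOMAIN_UNMANAGED };

struct mock_domain;

struct mock_ops {
	int (*set_nesting_vmid)(struct mock_domain *domain, uint32_t vmid);
	int (*get_nesting_vmid)(struct mock_domain *domain, uint32_t *vmid);
};

struct mock_domain {
	enum mock_domain_type type;
	const struct mock_ops *ops;
	uint32_t vmid;
};

/* Same shape as iommu_set_nesting_vmid(): type check, callback check, forward. */
int mock_iommu_set_nesting_vmid(struct mock_domain *domain, uint32_t vmid)
{
	if (domain->type != MOCK_DOMAIN_UNMANAGED)
		return -EINVAL;
	if (!domain->ops->set_nesting_vmid)
		return -EINVAL;
	return domain->ops->set_nesting_vmid(domain, vmid);
}

int mock_iommu_get_nesting_vmid(struct mock_domain *domain, uint32_t *vmid)
{
	if (domain->type != MOCK_DOMAIN_UNMANAGED)
		return -EINVAL;
	if (!domain->ops->get_nesting_vmid)
		return -EINVAL;
	return domain->ops->get_nesting_vmid(domain, vmid);
}

/* A trivial "driver" that stores the VMID in the domain itself. */
static int drv_set_vmid(struct mock_domain *domain, uint32_t vmid)
{
	domain->vmid = vmid;
	return 0;
}

static int drv_get_vmid(struct mock_domain *domain, uint32_t *vmid)
{
	*vmid = domain->vmid;
	return 0;
}

static const struct mock_ops drv_ops = { drv_set_vmid, drv_get_vmid };
static const struct mock_ops no_ops = { NULL, NULL };

/* Exercise the success path and both -EINVAL paths; 0 means all behaved. */
int demo(void)
{
	struct mock_domain dom  = { MOCK_DOMAIN_UNMANAGED, &drv_ops, 0 };
	struct mock_domain dma  = { MOCK_DOMAIN_DMA, &drv_ops, 0 };
	struct mock_domain bare = { MOCK_DOMAIN_UNMANAGED, &no_ops, 0 };
	uint32_t v = 0;

	if (mock_iommu_set_nesting_vmid(&dom, 5) != 0)
		return 1;
	if (mock_iommu_get_nesting_vmid(&dom, &v) != 0 || v != 5)
		return 2;
	if (mock_iommu_set_nesting_vmid(&dma, 5) != -EINVAL)	/* wrong type */
		return 3;
	if (mock_iommu_set_nesting_vmid(&bare, 5) != -EINVAL)	/* no callback */
		return 4;
	return 0;
}
```

The callback-NULL check is what makes the ops optional: drivers that do not support nested VMIDs simply leave the pointers unset and callers get -EINVAL.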
From patchwork Tue Aug 31 02:59:12 2021
X-Patchwork-Submitter: Nicolin Chen
X-Patchwork-Id: 12466363
From: Nicolin Chen
Subject: [RFC][PATCH v2 02/13] vfio: add VFIO_IOMMU_GET_VMID and VFIO_IOMMU_SET_VMID
Date: Mon, 30 Aug 2021 19:59:12 -0700
Message-ID: <20210831025923.15812-3-nicolinc@nvidia.com>
In-Reply-To: <20210831025923.15812-1-nicolinc@nvidia.com>
This patch adds a pair of new ioctl commands that let user space (the
virtual machine hypervisor) get and set the VMID, a Virtual Machine
Identifier used by some IOMMUs to tag TLB entries. Similar to a CPU MMU,
tagging with a VMID allows the IOMMU to invalidate, in one operation, all
TLB entries belonging to the same VM.

Signed-off-by: Nicolin Chen
---
 drivers/vfio/vfio.c       | 13 +++++++++++++
 include/uapi/linux/vfio.h | 26 ++++++++++++++++++++++++++
 2 files changed, 39 insertions(+)

diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index 3c034fe14ccb..c17b25c127a2 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -59,6 +59,7 @@ struct vfio_container {
 	struct rw_semaphore		group_lock;
 	struct vfio_iommu_driver	*iommu_driver;
 	void				*iommu_data;
+	u32				vmid;
 	bool				noiommu;
 };
 
@@ -1190,6 +1191,16 @@ static long vfio_fops_unl_ioctl(struct file *filep,
 	case VFIO_SET_IOMMU:
 		ret = vfio_ioctl_set_iommu(container, arg);
 		break;
+	case VFIO_IOMMU_GET_VMID:
+		ret = copy_to_user((void __user *)arg, &container->vmid,
+				   sizeof(u32)) ? -EFAULT : 0;
+		break;
+	case VFIO_IOMMU_SET_VMID:
+		if ((u32)arg == VFIO_IOMMU_VMID_INVALID)
+			return -EINVAL;
+		container->vmid = (u32)arg;
+		ret = 0;
+		break;
 	default:
 		driver = container->iommu_driver;
 		data = container->iommu_data;
@@ -1213,6 +1224,8 @@ static int vfio_fops_open(struct inode *inode, struct file *filep)
 	init_rwsem(&container->group_lock);
 	kref_init(&container->kref);
 
+	container->vmid = VFIO_IOMMU_VMID_INVALID;
+
 	filep->private_data = container;
 
 	return 0;
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index ef33ea002b0b..58c5fa6aaca6 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -1216,6 +1216,32 @@ struct vfio_iommu_type1_dirty_bitmap_get {
 #define VFIO_IOMMU_DIRTY_PAGES             _IO(VFIO_TYPE, VFIO_BASE + 17)
 
+/**
+ * VFIO_IOMMU_GET_VMID - _IOWR(VFIO_TYPE, VFIO_BASE + 22, __u32 *vmid)
+ * VFIO_IOMMU_SET_VMID - _IOWR(VFIO_TYPE, VFIO_BASE + 23, __u32 vmid)
+ *
+ * These IOCTLs are used for VMID alignment between the kernel and a user
+ * space hypervisor. In a virtualization use case, a guest owns the first
+ * stage translation, and the hypervisor owns the second stage translation.
+ * A VMID is a Virtual Machine Identifier used to tag the TLB entries of a
+ * VM. If a VM has multiple physical devices assigned to it, while these
+ * devices are under different IOMMU domains, the VMIDs in the second stage
+ * configurations of these IOMMU domains can be aligned to a unified VMID
+ * value using these two IOCTLs.
+ *
+ * The caller should get the initial VMID value when the first physical
+ * device is assigned to the VM.
+ *
+ * The caller should then set this VMID so that other physical devices
+ * assigned to the same VM share the same value.
+ *
+ */
+#define VFIO_IOMMU_VMID_INVALID	(-1U)
+
+#define VFIO_IOMMU_GET_VMID	_IO(VFIO_TYPE, VFIO_BASE + 22)
+
+#define VFIO_IOMMU_SET_VMID	_IO(VFIO_TYPE, VFIO_BASE + 23)
+
 /* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
 
 /*
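The container-side behavior of the two new ioctls is small enough to model in user space: GET copies out whatever VMID the container currently holds (initially the invalid sentinel), and SET rejects the sentinel but otherwise stores the value. The sketch below is a hypothetical stand-alone model of that logic (`mock_container`, `mock_ioctl_*` are invented names, and plain pointer copies stand in for copy_to_user()).

```c
#include <errno.h>
#include <stdint.h>

#define MOCK_VMID_INVALID ((uint32_t)-1)	/* mirrors VFIO_IOMMU_VMID_INVALID */

/* Stand-in for the vmid field added to struct vfio_container. */
struct mock_container {
	uint32_t vmid;
};

/* Mirrors vfio_fops_open(): the VMID starts out invalid. */
void mock_container_open(struct mock_container *c)
{
	c->vmid = MOCK_VMID_INVALID;
}

/* Mirrors the VFIO_IOMMU_GET_VMID case (copy_to_user becomes a plain store). */
int mock_ioctl_get_vmid(struct mock_container *c, uint32_t *out)
{
	*out = c->vmid;
	return 0;
}

/* Mirrors the VFIO_IOMMU_SET_VMID case: the sentinel is not settable. */
int mock_ioctl_set_vmid(struct mock_container *c, uint32_t vmid)
{
	if (vmid == MOCK_VMID_INVALID)
		return -EINVAL;
	c->vmid = vmid;
	return 0;
}

/* Walk through open -> get (invalid) -> set -> get; 0 means all behaved. */
int demo(void)
{
	struct mock_container c;
	uint32_t v = 0;

	mock_container_open(&c);
	if (mock_ioctl_get_vmid(&c, &v) != 0 || v != MOCK_VMID_INVALID)
		return 1;
	if (mock_ioctl_set_vmid(&c, MOCK_VMID_INVALID) != -EINVAL)
		return 2;
	if (mock_ioctl_set_vmid(&c, 7) != 0)
		return 3;
	if (mock_ioctl_get_vmid(&c, &v) != 0 || v != 7)
		return 4;
	return 0;
}
```

Reserving the all-ones value as a sentinel is what lets a hypervisor distinguish "no VMID assigned yet" from any real VMID when it reads the container state.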
From patchwork Tue Aug 31 02:59:13 2021
X-Patchwork-Submitter: Nicolin Chen
X-Patchwork-Id: 12466341
From: Nicolin Chen
Subject: [RFC][PATCH v2 03/13] vfio: Document VMID control for IOMMU Virtualization
Date: Mon, 30 Aug 2021 19:59:13 -0700
Message-ID: <20210831025923.15812-4-nicolinc@nvidia.com>
In-Reply-To: <20210831025923.15812-1-nicolinc@nvidia.com>
The VFIO API was enhanced with two new ioctls to set and get the VMID
between the kernel and the virtual machine hypervisor. Update the
document accordingly.

Signed-off-by: Nicolin Chen
---
 Documentation/driver-api/vfio.rst | 34 +++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/Documentation/driver-api/vfio.rst b/Documentation/driver-api/vfio.rst
index c663b6f97825..a76a17065cdd 100644
--- a/Documentation/driver-api/vfio.rst
+++ b/Documentation/driver-api/vfio.rst
@@ -239,6 +239,40 @@ group and can access them as follows::
 	/* Gratuitous device reset and go... */
 	ioctl(device, VFIO_DEVICE_RESET);
 
+IOMMU Virtual Machine Identifier (VMID)
+---------------------------------------
+In case of virtualization, each VM is assigned a Virtual Machine Identifier
+(VMID). This VMID is used to tag translation lookaside buffer (TLB) entries
+to identify which VM each entry belongs to. This tagging allows translations
+for multiple different VMs to be present in the TLBs at the same time.
+
+The IOMMU kernel driver is responsible for allocating a VMID. However, only
+a hypervisor knows which physical devices get assigned to the same VM. Thus,
+when the first physical device gets assigned to the VM, once the hypervisor
+finishes its IOCTL call of VFIO_SET_IOMMU, it should call the following::
+
+	struct vm {
+		int iommu_type;
+		uint32_t vmid;	/* initial value is VFIO_IOMMU_VMID_INVALID */
+	} vm0;
+
+	/* ... */
+	ioctl(container->fd, VFIO_SET_IOMMU, vm0.iommu_type);
+	/* ... */
+	if (vm0.vmid == VFIO_IOMMU_VMID_INVALID)
+		ioctl(container->fd, VFIO_IOMMU_GET_VMID, &vm0.vmid);
+
+This VMID is the shared value, across the entire VM, between all the
+physical devices that are assigned to it. So, when other physical devices
+get assigned to the VM, before the hypervisor runs into the IOCTL call of
+VFIO_IOMMU_SET_VMID, it should call the following::
+
+	/* ... */
+	ioctl(container->fd, VFIO_SET_IOMMU, vm0.iommu_type);
+	/* ... */
+	if (vm0.vmid != VFIO_IOMMU_VMID_INVALID)
+		ioctl(container->fd, VFIO_IOMMU_SET_VMID, vm0.vmid);
+
 VFIO User API
 -------------------------------------------------------------------------------
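The documented flow can be paraphrased as: the first container hands a driver-allocated VMID to the hypervisor via GET, and the hypervisor then pushes that same value into every later container via SET, so all of the VM's devices end up tagged with one VMID. A hypothetical stand-alone model of that sharing flow follows (`mock_container`, `mock_get_vmid`, `mock_set_vmid`, and the `next_alloc` parameter standing in for the driver's allocator are all invented for illustration).

```c
#include <errno.h>
#include <stdint.h>

#define MOCK_VMID_INVALID ((uint32_t)-1)

/* One mock container per assigned physical device. */
struct mock_container {
	uint32_t vmid;
};

/*
 * GET: if no VMID has been assigned yet, pretend the IOMMU driver
 * allocates one (next_alloc models the driver-side allocator).
 */
uint32_t mock_get_vmid(struct mock_container *c, uint32_t next_alloc)
{
	if (c->vmid == MOCK_VMID_INVALID)
		c->vmid = next_alloc;
	return c->vmid;
}

/* SET: adopt an existing VMID; the invalid sentinel is rejected. */
int mock_set_vmid(struct mock_container *c, uint32_t vmid)
{
	if (vmid == MOCK_VMID_INVALID)
		return -EINVAL;
	c->vmid = vmid;
	return 0;
}

/* First device learns the VMID; a later device reuses it; 0 on success. */
int demo(void)
{
	struct mock_container c0 = { MOCK_VMID_INVALID };
	struct mock_container c1 = { MOCK_VMID_INVALID };
	uint32_t vm_vmid;

	vm_vmid = mock_get_vmid(&c0, 3);	/* first assignment: GET */
	if (mock_set_vmid(&c1, vm_vmid) != 0)	/* later assignment: SET */
		return 1;
	if (c0.vmid != c1.vmid)			/* both containers now aligned */
		return 2;
	return 0;
}
```

The ordering matters: GET must happen after the first VFIO_SET_IOMMU (so the driver has a domain to allocate a VMID for), and SET must happen before later devices start DMA, otherwise their TLB entries would be tagged with a different VMID.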
From: Nicolin Chen
Subject: [RFC][PATCH v2 04/13] vfio: add set_vmid and get_vmid for vfio_iommu_type1
Date: Mon, 30 Aug 2021 19:59:14 -0700
Message-ID: <20210831025923.15812-5-nicolinc@nvidia.com>
In-Reply-To: <20210831025923.15812-1-nicolinc@nvidia.com>
A VMID is generated in the IOMMU driver, which is called from the
->attach_group() callback. So call ->get_vmid() right after
->attach_group() creates a new VMID, and call ->set_vmid() before it, to
let the driver reuse an existing VMID.

Signed-off-by: Nicolin Chen
---
 drivers/vfio/vfio.c  | 12 ++++++++++++
 include/linux/vfio.h |  2 ++
 2 files changed, 14 insertions(+)

diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index c17b25c127a2..8b7442deca93 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -1080,9 +1080,21 @@ static int __vfio_container_attach_groups(struct vfio_container *container,
 	int ret = -ENODEV;

 	list_for_each_entry(group, &container->group_list, container_next) {
+		if (driver->ops->set_vmid && container->vmid != VFIO_IOMMU_VMID_INVALID) {
+			ret = driver->ops->set_vmid(data, container->vmid);
+			if (ret)
+				goto unwind;
+		}
+
 		ret = driver->ops->attach_group(data, group->iommu_group);
 		if (ret)
 			goto unwind;
+
+		if (driver->ops->get_vmid && container->vmid == VFIO_IOMMU_VMID_INVALID) {
+			ret = driver->ops->get_vmid(data, &container->vmid);
+			if (ret)
+				goto unwind;
+		}
 	}

 	return ret;

diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index b53a9557884a..b43e7cbef4ab 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -126,6 +126,8 @@ struct vfio_iommu_driver_ops {
 					  struct iommu_group *group);
 	void		(*notify)(void *iommu_data,
 				  enum vfio_iommu_notify_type event);
+	int		(*set_vmid)(void *iommu_data, u32 vmid);
+	int		(*get_vmid)(void *iommu_data, u32 *vmid);
 };

 extern int vfio_register_iommu_driver(const struct vfio_iommu_driver_ops *ops);

From patchwork Tue Aug 31 02:59:15 2021
X-Patchwork-Submitter: Nicolin Chen
X-Patchwork-Id: 12466343
From: Nicolin Chen
Subject: [RFC][PATCH v2 05/13] vfio/type1: Implement set_vmid and get_vmid
Date: Mon, 30 Aug 2021 19:59:15 -0700
Message-ID: <20210831025923.15812-6-nicolinc@nvidia.com>
In-Reply-To: <20210831025923.15812-1-nicolinc@nvidia.com>
Now we have a pair of ->set_vmid() and ->get_vmid() function pointers.
This patch implements them to exchange the VMID value between the vfio
container and vfio_iommu_type1.

Signed-off-by: Nicolin Chen
---
 drivers/vfio/vfio_iommu_type1.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 0e9217687f5c..bb5d949bc1af 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -74,6 +74,7 @@ struct vfio_iommu {
 	uint64_t		pgsize_bitmap;
 	uint64_t		num_non_pinned_groups;
 	wait_queue_head_t	vaddr_wait;
+	uint32_t		vmid;
 	bool			v2;
 	bool			nesting;
 	bool			dirty_page_tracking;
@@ -2674,6 +2675,7 @@ static void *vfio_iommu_type1_open(unsigned long arg)
 	iommu->dma_list = RB_ROOT;
 	iommu->dma_avail = dma_entry_limit;
 	iommu->container_open = true;
+	iommu->vmid = VFIO_IOMMU_VMID_INVALID;
 	mutex_init(&iommu->lock);
 	BLOCKING_INIT_NOTIFIER_HEAD(&iommu->notifier);
 	init_waitqueue_head(&iommu->vaddr_wait);
@@ -3255,6 +3257,27 @@ static void vfio_iommu_type1_notify(void *iommu_data,
 	wake_up_all(&iommu->vaddr_wait);
 }

+static int vfio_iommu_type1_get_vmid(void *iommu_data, u32 *vmid)
+{
+	struct vfio_iommu *iommu = iommu_data;
+
+	*vmid = iommu->vmid;
+
+	return 0;
+}
+
+static int vfio_iommu_type1_set_vmid(void *iommu_data, u32 vmid)
+{
+	struct vfio_iommu *iommu = iommu_data;
+
+	if (vmid == VFIO_IOMMU_VMID_INVALID)
+		return -EINVAL;
+
+	iommu->vmid = vmid;
+
+	return 0;
+}
+
 static const struct vfio_iommu_driver_ops vfio_iommu_driver_ops_type1 = {
 	.name			= "vfio-iommu-type1",
 	.owner			= THIS_MODULE,
@@ -3270,6 +3293,8 @@ static const struct vfio_iommu_driver_ops vfio_iommu_driver_ops_type1 = {
 	.dma_rw			= vfio_iommu_type1_dma_rw,
 	.group_iommu_domain	= vfio_iommu_type1_group_iommu_domain,
 	.notify			= vfio_iommu_type1_notify,
+	.set_vmid		= vfio_iommu_type1_set_vmid,
+	.get_vmid		= vfio_iommu_type1_get_vmid,
 };

 static int __init vfio_iommu_type1_init(void)

From patchwork Tue Aug 31 02:59:16 2021
X-Patchwork-Submitter: Nicolin Chen
X-Patchwork-Id: 12466351
From: Nicolin Chen
Subject: [RFC][PATCH v2 06/13] vfio/type1: Set/get VMID to/from iommu driver
Date: Mon, 30 Aug 2021 19:59:16 -0700
Message-ID: <20210831025923.15812-7-nicolinc@nvidia.com>
In-Reply-To: <20210831025923.15812-1-nicolinc@nvidia.com>
This patch uses the pair of iommu_set_nesting_vmid() and
iommu_get_nesting_vmid() callbacks to exchange a VMID with the IOMMU core
(and then an IOMMU driver). As a VMID is generated in an IOMMU driver,
which is invoked via the vfio_iommu_attach_group() function call, call
iommu_get_nesting_vmid() right after the attach creates a VMID, and call
iommu_set_nesting_vmid() before it to let the IOMMU driver reuse an
existing VMID.

Signed-off-by: Nicolin Chen
---
 drivers/vfio/vfio_iommu_type1.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index bb5d949bc1af..9e72d74dedcd 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -2322,12 +2322,24 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 		ret = iommu_enable_nesting(domain->domain);
 		if (ret)
 			goto out_domain;
+
+		if (iommu->vmid != VFIO_IOMMU_VMID_INVALID) {
+			ret = iommu_set_nesting_vmid(domain->domain, iommu->vmid);
+			if (ret)
+				goto out_domain;
+		}
 	}

 	ret = vfio_iommu_attach_group(domain, group);
 	if (ret)
 		goto out_domain;

+	if (iommu->nesting && iommu->vmid == VFIO_IOMMU_VMID_INVALID) {
+		ret = iommu_get_nesting_vmid(domain->domain, &iommu->vmid);
+		if (ret)
+			goto out_domain;
+	}
+
 	/* Get aperture info */
 	geo = &domain->domain->geometry;
 	if (vfio_iommu_aper_conflict(iommu, geo->aperture_start,

From patchwork Tue Aug 31 02:59:17 2021
X-Patchwork-Submitter: Nicolin Chen
X-Patchwork-Id: 12466347
From: Nicolin Chen
Subject: [RFC][PATCH v2 07/13] iommu/arm-smmu-v3: Add shared VMID support for NESTING
Date: Mon, 30 Aug 2021 19:59:17 -0700
Message-ID: <20210831025923.15812-8-nicolinc@nvidia.com>
In-Reply-To: <20210831025923.15812-1-nicolinc@nvidia.com>
CIP:216.228.112.36;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:mail.nvidia.com;PTR:schybrid05.nvidia.com;CAT:NONE;SFS:(4636009)(39860400002)(396003)(376002)(346002)(136003)(36840700001)(46966006)(1076003)(186003)(36906005)(70206006)(36860700001)(8936002)(2906002)(2616005)(426003)(356005)(4326008)(336012)(26005)(6666004)(8676002)(82310400003)(47076005)(478600001)(86362001)(7416002)(83380400001)(316002)(36756003)(110136005)(5660300002)(7636003)(54906003)(82740400003)(70586007)(7696005)(2101003);DIR:OUT;SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Aug 2021 03:07:32.5218 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 525f8d83-ce64-4678-a839-08d96c2c79bb X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a;Ip=[216.228.112.36];Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: CO1NAM11FT013.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR12MB5509 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org A VMID can be shared among iommu domains being attached to the same Virtual Machine in order to improve utilization of TLB cache. This patch implements ->set_nesting_vmid() and ->get_nesting_vmid() to set/get s2_cfg->vmid for nesting cases, and then changes to reuse the VMID. 
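The shared-VMID lifecycle described above — allocate from a bitmap on first attach, bump a refcount on later attaches, release the bitmap bit only when the last domain is freed — can be sketched in plain C. All names below are invented for illustration and are not the driver's code:

```c
#include <stdint.h>

/*
 * Sketch of the shared-VMID lifecycle (hypothetical names): a VMID comes
 * out of a bitmap on first use, later users for the same VM only bump a
 * refcount, and the bitmap bit is released only when the last user detaches.
 */
#define NUM_VMIDS 64

static uint64_t vmid_map;		/* bit i set => VMID i is in use */
static int vmid_refcnt[NUM_VMIDS];	/* cf. smmu->vmid_refcnts[] */

static int vmid_bitmap_alloc(void)
{
	for (int i = 1; i < NUM_VMIDS; i++) {	/* VMID 0 stays reserved */
		if (!(vmid_map & ((uint64_t)1 << i))) {
			vmid_map |= (uint64_t)1 << i;
			return i;
		}
	}
	return -1;
}

/* Attach path: reuse a preset shared VMID or allocate one, then refcount. */
static int domain_finalise_s2(int *cfg_vmid)
{
	if (!*cfg_vmid) {
		int vmid = vmid_bitmap_alloc();

		if (vmid < 0)
			return vmid;
		*cfg_vmid = vmid;
	}
	vmid_refcnt[*cfg_vmid]++;
	return 0;
}

/* Detach path: only the last reference clears the bitmap bit. */
static void domain_free(int *cfg_vmid)
{
	int vmid = *cfg_vmid;

	if (vmid && --vmid_refcnt[vmid] == 0)
		vmid_map &= ~((uint64_t)1 << vmid);
	*cfg_vmid = 0;
}
```

With two domains sharing one VMID, freeing the first only drops the refcount to 1; the bitmap bit survives until the second domain is freed, mirroring the `atomic_dec_return()` check in `arm_smmu_domain_free()` below.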
Signed-off-by: Nicolin Chen
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 65 +++++++++++++++++++--
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |  1 +
 2 files changed, 60 insertions(+), 6 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index a388e318f86e..c0ae117711fa 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2051,7 +2051,7 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
 		mutex_unlock(&arm_smmu_asid_lock);
 	} else {
 		struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
-		if (cfg->vmid)
+		if (cfg->vmid && !atomic_dec_return(&smmu->vmid_refcnts[cfg->vmid]))
 			arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
 	}
 
@@ -2121,17 +2121,28 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
 				       struct arm_smmu_master *master,
 				       struct io_pgtable_cfg *pgtbl_cfg)
 {
-	int vmid;
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
 	typeof(&pgtbl_cfg->arm_lpae_s2_cfg.vtcr) vtcr;
 
-	vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
-	if (vmid < 0)
-		return vmid;
+	/*
+	 * For a nested case where multiple passthrough devices are assigned
+	 * to a VM, they share a common VMID, allocated when the first
+	 * passthrough device is attached to the VM. So cfg->vmid might have
+	 * been set already in arm_smmu_set_nesting_vmid(), reported from the
+	 * hypervisor. In that case, simply reuse the shared VMID and increase
+	 * its refcount.
+	 */
+	if (!cfg->vmid) {
+		int vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
+
+		if (vmid < 0)
+			return vmid;
+		cfg->vmid = (u16)vmid;
+	}
+
+	atomic_inc(&smmu->vmid_refcnts[cfg->vmid]);
 
 	vtcr = &pgtbl_cfg->arm_lpae_s2_cfg.vtcr;
-	cfg->vmid	= (u16)vmid;
 	cfg->vttbr	= pgtbl_cfg->arm_lpae_s2_cfg.vttbr;
 	cfg->vtcr	= FIELD_PREP(STRTAB_STE_2_VTCR_S2T0SZ, vtcr->tsz) |
 			  FIELD_PREP(STRTAB_STE_2_VTCR_S2SL0, vtcr->sl) |
@@ -2731,6 +2742,44 @@ static int arm_smmu_enable_nesting(struct iommu_domain *domain)
 	return ret;
 }
 
+static int arm_smmu_set_nesting_vmid(struct iommu_domain *domain, u32 vmid)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct arm_smmu_s2_cfg *s2_cfg = &smmu_domain->s2_cfg;
+	int ret = 0;
+
+	if (vmid == IOMMU_VMID_INVALID)
+		return -EINVAL;
+
+	mutex_lock(&smmu_domain->init_mutex);
+	if (smmu_domain->smmu || smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
+		ret = -EPERM;
+	else
+		s2_cfg->vmid = vmid;
+	mutex_unlock(&smmu_domain->init_mutex);
+
+	return ret;
+}
+
+static int arm_smmu_get_nesting_vmid(struct iommu_domain *domain, u32 *vmid)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct arm_smmu_s2_cfg *s2_cfg = &smmu_domain->s2_cfg;
+	int ret = 0;
+
+	if (!vmid)
+		return -EINVAL;
+
+	mutex_lock(&smmu_domain->init_mutex);
+	if (smmu_domain->smmu || smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
+		ret = -EPERM;
+	else
+		*vmid = s2_cfg->vmid;
+	mutex_unlock(&smmu_domain->init_mutex);
+
+	return ret;
+}
+
 static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
 {
 	return iommu_fwspec_add_ids(dev, args->args, 1);
@@ -2845,6 +2894,8 @@ static struct iommu_ops arm_smmu_ops = {
 	.release_device		= arm_smmu_release_device,
 	.device_group		= arm_smmu_device_group,
 	.enable_nesting		= arm_smmu_enable_nesting,
+	.set_nesting_vmid	= arm_smmu_set_nesting_vmid,
+	.get_nesting_vmid	= arm_smmu_get_nesting_vmid,
 	.of_xlate		= arm_smmu_of_xlate,
 	.get_resv_regions	= arm_smmu_get_resv_regions,
 	.put_resv_regions	= generic_iommu_put_resv_regions,
@@ -3530,6 +3581,8 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 	/* ASID/VMID sizes */
 	smmu->asid_bits = reg & IDR0_ASID16 ? 16 : 8;
 	smmu->vmid_bits = reg & IDR0_VMID16 ? 16 : 8;
+	smmu->vmid_refcnts = devm_kcalloc(smmu->dev, 1 << smmu->vmid_bits,
+					  sizeof(*smmu->vmid_refcnts), GFP_KERNEL);
 
 	/* IDR1 */
 	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR1);
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 4cb136f07914..ea2c61d52df8 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -664,6 +664,7 @@ struct arm_smmu_device {
 #define ARM_SMMU_MAX_VMIDS		(1 << 16)
 	unsigned int			vmid_bits;
 	DECLARE_BITMAP(vmid_map, ARM_SMMU_MAX_VMIDS);
+	atomic_t			*vmid_refcnts;
 
 	unsigned int			ssid_bits;
 	unsigned int			sid_bits;

From patchwork Tue Aug 31 02:59:18 2021
From: Nicolin Chen
Subject: [RFC][PATCH v2 08/13] iommu/arm-smmu-v3: Add VMID alloc/free helpers
Date: Mon, 30 Aug 2021 19:59:18 -0700
Message-ID: <20210831025923.15812-9-nicolinc@nvidia.com>
In-Reply-To: <20210831025923.15812-1-nicolinc@nvidia.com>
X-Mailing-List: kvm@vger.kernel.org

The NVIDIA implementation needs to link its Virtual Interface to a VMID before a device gets attached to the corresponding IOMMU domain. One way to ensure that is to allocate a VMID on the implementation side and pass it down to the virtual machine hypervisor, which can later set it back on the passthrough devices' IOMMU domains via the newly added arm_smmu_set/get_nesting_vmid() functions. This patch adds a pair of helpers to allocate and free a VMID in the bitmap.
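The flow these helpers enable — allocate a VMID up front, hand it to the hypervisor, and have the hypervisor program the same VMID into each passthrough device's domain before attach — might look like the following sketch. The names here are invented stand-ins for illustration, not the driver's actual API:

```c
#include <stdint.h>

/*
 * Hypothetical sketch: vmid_alloc()/vmid_free() stand in for the new
 * arm_smmu_vmid_alloc()/arm_smmu_vmid_free() helpers, and
 * set_nesting_vmid() stands in for arm_smmu_set_nesting_vmid() from the
 * previous patch.
 */
static uint64_t vmid_map;	/* stand-in for smmu->vmid_map */

static int vmid_alloc(void)	/* cf. arm_smmu_vmid_alloc() */
{
	for (int i = 1; i < 64; i++) {
		if (!(vmid_map & ((uint64_t)1 << i))) {
			vmid_map |= (uint64_t)1 << i;
			return i;
		}
	}
	return -1;
}

static void vmid_free(int vmid)	/* cf. arm_smmu_vmid_free() */
{
	vmid_map &= ~((uint64_t)1 << vmid);
}

struct domain {
	uint32_t s2_vmid;	/* stand-in for s2_cfg->vmid */
};

/* cf. arm_smmu_set_nesting_vmid(): record the shared VMID before attach */
static void set_nesting_vmid(struct domain *d, uint32_t vmid)
{
	d->s2_vmid = vmid;
}
```

Every passthrough domain of the VM then carries the same stage-2 VMID when the attach path runs, which is exactly the precondition the previous patch's reuse logic depends on.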
Signed-off-by: Nicolin Chen
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 10 ++++++++++
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |  3 +++
 2 files changed, 13 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index c0ae117711fa..497d55ec659b 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2032,6 +2032,16 @@ static void arm_smmu_bitmap_free(unsigned long *map, int idx)
 	clear_bit(idx, map);
 }
 
+int arm_smmu_vmid_alloc(struct arm_smmu_device *smmu)
+{
+	return arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
+}
+
+void arm_smmu_vmid_free(struct arm_smmu_device *smmu, u16 vmid)
+{
+	arm_smmu_bitmap_free(smmu->vmid_map, vmid);
+}
+
 static void arm_smmu_domain_free(struct iommu_domain *domain)
 {
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index ea2c61d52df8..20463d17fd9f 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -749,6 +749,9 @@ bool arm_smmu_free_asid(struct arm_smmu_ctx_desc *cd);
 int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain, int ssid,
 			    unsigned long iova, size_t size);
 
+int arm_smmu_vmid_alloc(struct arm_smmu_device *smmu);
+void arm_smmu_vmid_free(struct arm_smmu_device *smmu, u16 vmid);
+
 #ifdef CONFIG_ARM_SMMU_V3_SVA
 bool arm_smmu_sva_supported(struct arm_smmu_device *smmu);
 bool arm_smmu_master_sva_supported(struct arm_smmu_master *master);

From patchwork Tue Aug 31 02:59:19 2021
From: Nicolin Chen
Subject: [RFC][PATCH v2 09/13] iommu/arm-smmu-v3: Pass dev pointer to arm_smmu_detach_dev
Date: Mon, 30 Aug 2021 19:59:19 -0700
Message-ID: <20210831025923.15812-10-nicolinc@nvidia.com>
In-Reply-To: <20210831025923.15812-1-nicolinc@nvidia.com>
X-Mailing-List: kvm@vger.kernel.org

We are adding an NVIDIA implementation that needs a ->detach_dev() callback along with the dev pointer, in order to grab client information.

Signed-off-by: Nicolin Chen
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 497d55ec659b..6878a83582b9 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2377,7 +2377,7 @@ static void arm_smmu_disable_pasid(struct arm_smmu_master *master)
 	pci_disable_pasid(pdev);
 }
 
-static void arm_smmu_detach_dev(struct arm_smmu_master *master)
+static void arm_smmu_detach_dev(struct arm_smmu_master *master, struct device *dev)
 {
 	unsigned long flags;
 	struct arm_smmu_domain *smmu_domain = master->domain;
@@ -2421,7 +2421,7 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 		return -EBUSY;
 	}
 
-	arm_smmu_detach_dev(master);
+	arm_smmu_detach_dev(master, dev);
 
 	mutex_lock(&smmu_domain->init_mutex);
 
@@ -2713,7 +2713,7 @@ static void arm_smmu_release_device(struct device *dev)
 	master = dev_iommu_priv_get(dev);
 	if (WARN_ON(arm_smmu_master_sva_enabled(master)))
 		iopf_queue_remove_device(master->smmu->evtq.iopf, dev);
-	arm_smmu_detach_dev(master);
+	arm_smmu_detach_dev(master, dev);
 	arm_smmu_disable_pasid(master);
 	arm_smmu_remove_master(master);
 	kfree(master);

From patchwork Tue Aug 31 02:59:20 2021
From: Nicolin Chen
Subject: [RFC][PATCH v2 10/13] iommu/arm-smmu-v3: Pass cmdq pointer in arm_smmu_cmdq_issue_cmdlist()
Date: Mon, 30 Aug 2021 19:59:20 -0700
Message-ID: <20210831025923.15812-11-nicolinc@nvidia.com>
In-Reply-To: <20210831025923.15812-1-nicolinc@nvidia.com>
X-Mailing-List: kvm@vger.kernel.org

The driver currently calls
the arm_smmu_get_cmdq() helper in several internal functions, even though they are all reached from the same caller: arm_smmu_cmdq_issue_cmdlist(). Change these functions to take the cmdq pointer as a parameter instead of calling arm_smmu_get_cmdq() every time.

This also helps the NVIDIA implementation, which maintains its own cmdq pointers and needs to redirect the cmdq pointer from smmu->cmdq to one of its own queues, after scanning the opcodes of the cmdlist for commands it cannot accept.

Signed-off-by: Nicolin Chen
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 6878a83582b9..216f3442aac4 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -584,11 +584,11 @@ static void arm_smmu_cmdq_poll_valid_map(struct arm_smmu_cmdq *cmdq,
 
 /* Wait for the command queue to become non-full */
 static int arm_smmu_cmdq_poll_until_not_full(struct arm_smmu_device *smmu,
+					     struct arm_smmu_cmdq *cmdq,
 					     struct arm_smmu_ll_queue *llq)
 {
 	unsigned long flags;
 	struct arm_smmu_queue_poll qp;
-	struct arm_smmu_cmdq *cmdq = arm_smmu_get_cmdq(smmu);
 	int ret = 0;
 
 	/*
@@ -619,11 +619,11 @@ static int arm_smmu_cmdq_poll_until_not_full(struct arm_smmu_device *smmu,
  * Must be called with the cmdq lock held in some capacity.
  */
 static int __arm_smmu_cmdq_poll_until_msi(struct arm_smmu_device *smmu,
+					  struct arm_smmu_cmdq *cmdq,
 					  struct arm_smmu_ll_queue *llq)
 {
 	int ret = 0;
 	struct arm_smmu_queue_poll qp;
-	struct arm_smmu_cmdq *cmdq = arm_smmu_get_cmdq(smmu);
 	u32 *cmd = (u32 *)(Q_ENT(&cmdq->q, llq->prod));
 
 	queue_poll_init(smmu, &qp);
@@ -643,10 +643,10 @@ static int __arm_smmu_cmdq_poll_until_msi(struct arm_smmu_device *smmu,
  * Must be called with the cmdq lock held in some capacity.
  */
 static int __arm_smmu_cmdq_poll_until_consumed(struct arm_smmu_device *smmu,
+					       struct arm_smmu_cmdq *cmdq,
 					       struct arm_smmu_ll_queue *llq)
 {
 	struct arm_smmu_queue_poll qp;
-	struct arm_smmu_cmdq *cmdq = arm_smmu_get_cmdq(smmu);
 	u32 prod = llq->prod;
 	int ret = 0;
 
@@ -693,12 +693,13 @@ static int __arm_smmu_cmdq_poll_until_consumed(struct arm_smmu_device *smmu,
 }
 
 static int arm_smmu_cmdq_poll_until_sync(struct arm_smmu_device *smmu,
+					 struct arm_smmu_cmdq *cmdq,
 					 struct arm_smmu_ll_queue *llq)
 {
 	if (smmu->options & ARM_SMMU_OPT_MSIPOLL)
-		return __arm_smmu_cmdq_poll_until_msi(smmu, llq);
+		return __arm_smmu_cmdq_poll_until_msi(smmu, cmdq, llq);
 
-	return __arm_smmu_cmdq_poll_until_consumed(smmu, llq);
+	return __arm_smmu_cmdq_poll_until_consumed(smmu, cmdq, llq);
 }
 
 static void arm_smmu_cmdq_write_entries(struct arm_smmu_cmdq *cmdq, u64 *cmds,
@@ -755,7 +756,7 @@ static int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
 	while (!queue_has_space(&llq, n + sync)) {
 		local_irq_restore(flags);
-		if (arm_smmu_cmdq_poll_until_not_full(smmu, &llq))
+		if (arm_smmu_cmdq_poll_until_not_full(smmu, cmdq, &llq))
 			dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
 		local_irq_save(flags);
 	}
@@ -831,7 +832,7 @@ static int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
 	/* 5. If we are inserting a CMD_SYNC, we must wait for it to complete */
 	if (sync) {
 		llq.prod = queue_inc_prod_n(&llq, n);
-		ret = arm_smmu_cmdq_poll_until_sync(smmu, &llq);
+		ret = arm_smmu_cmdq_poll_until_sync(smmu, cmdq, &llq);
 		if (ret) {
 			dev_err_ratelimited(smmu->dev,
 					    "CMD_SYNC timeout at 0x%08x [hwprod 0x%08x, hwcons 0x%08x]\n",

From patchwork Tue Aug 31 02:59:21 2021
From: Nicolin Chen
Subject: [RFC][PATCH v2 11/13] iommu/arm-smmu-v3: Add implementation infrastructure
Date: Mon, 30 Aug 2021 19:59:21 -0700
Message-ID: <20210831025923.15812-12-nicolinc@nvidia.com>
In-Reply-To: <20210831025923.15812-1-nicolinc@nvidia.com>
References: <20210831025923.15812-1-nicolinc@nvidia.com>
List-ID: kvm@vger.kernel.org

From: Nate Watterson

Follow the arm-smmu driver's infrastructure for handling implementation-specific details outside the flow of the architectural code.

Signed-off-by: Nate Watterson
Signed-off-by: Nicolin Chen
---
 drivers/iommu/arm/arm-smmu-v3/Makefile           | 2 +-
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-impl.c | 8 ++++++++
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c      | 4 ++++
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h      | 4 ++++
 4 files changed, 17 insertions(+), 1 deletion(-)
 create mode 100644 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-impl.c

diff --git a/drivers/iommu/arm/arm-smmu-v3/Makefile b/drivers/iommu/arm/arm-smmu-v3/Makefile
index 54feb1ecccad..1f5838d3351b 100644
--- a/drivers/iommu/arm/arm-smmu-v3/Makefile
+++ b/drivers/iommu/arm/arm-smmu-v3/Makefile
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_ARM_SMMU_V3) += arm_smmu_v3.o
-arm_smmu_v3-objs-y += arm-smmu-v3.o
+arm_smmu_v3-objs-y += arm-smmu-v3.o arm-smmu-v3-impl.o
 arm_smmu_v3-objs-$(CONFIG_ARM_SMMU_V3_SVA) += arm-smmu-v3-sva.o
 arm_smmu_v3-objs := $(arm_smmu_v3-objs-y)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-impl.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-impl.c
new file mode 100644
index 000000000000..6947d28067a8
--- /dev/null
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-impl.c
@@ -0,0 +1,8 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include "arm-smmu-v3.h"
+
+struct arm_smmu_device *arm_smmu_v3_impl_init(struct arm_smmu_device *smmu)
+{
+	return smmu;
+}
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 216f3442aac4..510e1493fd5a 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -3844,6 +3844,10 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
 		return ret;
 	}
 
+	smmu = arm_smmu_v3_impl_init(smmu);
+	if (IS_ERR(smmu))
+		return PTR_ERR(smmu);
+
 	/* Set bypass mode according to firmware probing result */
 	bypass = !!ret;
 
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 20463d17fd9f..c65c39336916 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -810,4 +810,8 @@ static inline u32 arm_smmu_sva_get_pasid(struct iommu_sva *handle)
 static inline void arm_smmu_sva_notifier_synchronize(void) {}
 #endif /* CONFIG_ARM_SMMU_V3_SVA */
 
+
+/* Implementation details */
+struct arm_smmu_device *arm_smmu_v3_impl_init(struct arm_smmu_device *smmu);
+
 #endif /* _ARM_SMMU_V3_H */

From patchwork Tue Aug 31 02:59:22 2021
From: Nicolin Chen
Subject: [RFC][PATCH v2 12/13] iommu/arm-smmu-v3: Add support for NVIDIA CMDQ-Virtualization hw
Date: Mon, 30 Aug 2021 19:59:22 -0700
Message-ID: <20210831025923.15812-13-nicolinc@nvidia.com>
In-Reply-To: <20210831025923.15812-1-nicolinc@nvidia.com>
References: <20210831025923.15812-1-nicolinc@nvidia.com>
List-ID: kvm@vger.kernel.org

From: Nate Watterson

NVIDIA's Grace SoC has CMDQ-Virtualization (CMDQV) hardware, which adds multiple VCMDQ interfaces (VINTFs) to supplement the architected SMMU_CMDQ in an effort to reduce contention.

To make use of these supplemental CMDQs in the arm-smmu-v3 driver, this patch borrows the "implementation infrastructure" design from the arm-smmu driver, and then adds implementation-specific support for the ->device_reset() and ->get_cmdq() functions. Since NVIDIA's ->get_cmdq() implementation needs to check the first command of the cmdlist to determine whether to redirect it to its own VCMDQ, this patch also augments the arm_smmu_get_cmdq() function accordingly.

For the CMDQV driver itself, this patch only adds the parts essential to the host kernel with respect to virtualization use cases. VINTF0 is reserved for host kernel use, so it is initialized along with the driver. Note that, for the current plan, the CMDQV driver supports only ACPI configuration.
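The opcode check described above — deciding per cmdlist whether its commands may be redirected to a VCMDQ — can be sketched in plain C. This is an illustrative model only: the queue enum, the set of redirectable opcodes, and the helper name `pick_cmdq()` are assumptions made for the sketch, not the patch's actual code; the opcode field and values mirror the SMMUv3 command encoding (opcode in bits [7:0] of the first 64-bit word).

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Opcode lives in bits [7:0] of the first 64-bit word of a command
 * (CMDQ_0_OP in the SMMUv3 driver). Values below follow the SMMUv3
 * command encoding but are used here purely for illustration. */
#define CMDQ_0_OP_MASK     0xffULL
#define CMDQ_OP_TLBI_NH_VA 0x12
#define CMDQ_OP_CMD_SYNC   0x46

enum which_q { ARCH_CMDQ, NVIDIA_VCMDQ };

/* Hypothetical dispatch helper mirroring the idea behind ->get_cmdq():
 * inspect the opcode of the first command in the list and fall back to
 * the architected queue for anything the VCMDQ cannot accept. */
static enum which_q pick_cmdq(const uint64_t *cmds, size_t n)
{
	uint8_t op = cmds[0] & CMDQ_0_OP_MASK;

	(void)n; /* a fuller model would scan all n commands */

	/* Assume (for this sketch) only TLBI and SYNC go to the VCMDQ. */
	switch (op) {
	case CMDQ_OP_TLBI_NH_VA:
	case CMDQ_OP_CMD_SYNC:
		return NVIDIA_VCMDQ;
	default:
		return ARCH_CMDQ;
	}
}
```

The key point the commit message makes is that this decision must happen once, at the top of arm_smmu_cmdq_issue_cmdlist(), which is why the chosen cmdq pointer is then threaded through the polling helpers rather than re-derived in each of them.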
Signed-off-by: Nate Watterson
Signed-off-by: Nicolin Chen
---
 MAINTAINERS                                      |   2 +
 drivers/iommu/arm/arm-smmu-v3/Makefile           |   2 +-
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-impl.c     |   7 +
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c      |  15 +-
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h      |   8 +
 .../iommu/arm/arm-smmu-v3/nvidia-smmu-v3.c       | 432 ++++++++++++++++++
 6 files changed, 463 insertions(+), 3 deletions(-)
 create mode 100644 drivers/iommu/arm/arm-smmu-v3/nvidia-smmu-v3.c

diff --git a/MAINTAINERS b/MAINTAINERS
index f800abca74b0..7a2f21279d35 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -18428,8 +18428,10 @@ F:	drivers/i2c/busses/i2c-tegra.c
 TEGRA IOMMU DRIVERS
 M:	Thierry Reding
 R:	Krishna Reddy
+R:	Nicolin Chen
 L:	linux-tegra@vger.kernel.org
 S:	Supported
+F:	drivers/iommu/arm/arm-smmu-v3/nvidia-smmu-v3.c
 F:	drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c
 F:	drivers/iommu/tegra*
diff --git a/drivers/iommu/arm/arm-smmu-v3/Makefile b/drivers/iommu/arm/arm-smmu-v3/Makefile
index 1f5838d3351b..0aa84c0a50ea 100644
--- a/drivers/iommu/arm/arm-smmu-v3/Makefile
+++ b/drivers/iommu/arm/arm-smmu-v3/Makefile
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_ARM_SMMU_V3) += arm_smmu_v3.o
-arm_smmu_v3-objs-y += arm-smmu-v3.o arm-smmu-v3-impl.o
+arm_smmu_v3-objs-y += arm-smmu-v3.o arm-smmu-v3-impl.o nvidia-smmu-v3.o
 arm_smmu_v3-objs-$(CONFIG_ARM_SMMU_V3_SVA) += arm-smmu-v3-sva.o
 arm_smmu_v3-objs := $(arm_smmu_v3-objs-y)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-impl.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-impl.c
index 6947d28067a8..37d062e40eb5 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-impl.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-impl.c
@@ -4,5 +4,12 @@
 
 struct arm_smmu_device *arm_smmu_v3_impl_init(struct arm_smmu_device *smmu)
 {
+	/*
+	 * Nvidia implementation supports ACPI only, so calling its init()
+	 * unconditionally to walk through ACPI tables to probe the device.
+	 * It will keep the smmu pointer intact, if it fails.
+	 */
+	smmu = nvidia_smmu_v3_impl_init(smmu);
+
 	return smmu;
 }
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 510e1493fd5a..1b9459592f76 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -335,8 +335,11 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 	return 0;
 }
 
-static struct arm_smmu_cmdq *arm_smmu_get_cmdq(struct arm_smmu_device *smmu)
+static struct arm_smmu_cmdq *arm_smmu_get_cmdq(struct arm_smmu_device *smmu, u64 *cmds, int n)
 {
+	if (smmu->impl && smmu->impl->get_cmdq)
+		return smmu->impl->get_cmdq(smmu, cmds, n);
+
 	return &smmu->cmdq;
 }
 
@@ -742,7 +745,7 @@ static int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
 	u32 prod;
 	unsigned long flags;
 	bool owner;
-	struct arm_smmu_cmdq *cmdq = arm_smmu_get_cmdq(smmu);
+	struct arm_smmu_cmdq *cmdq = arm_smmu_get_cmdq(smmu, cmds, n);
 	struct arm_smmu_ll_queue llq, head;
 	int ret = 0;
 
@@ -3487,6 +3490,14 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
 		return ret;
 	}
 
+	if (smmu->impl && smmu->impl->device_reset) {
+		ret = smmu->impl->device_reset(smmu);
+		if (ret) {
+			dev_err(smmu->dev, "failed at implementation specific device_reset\n");
+			return ret;
+		}
+	}
+
 	return 0;
 }
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index c65c39336916..bb903a7fa662 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -647,6 +647,8 @@ struct arm_smmu_device {
 #define ARM_SMMU_OPT_MSIPOLL		(1 << 2)
 	u32				options;
 
+	const struct arm_smmu_impl	*impl;
+
 	struct arm_smmu_cmdq		cmdq;
 	struct arm_smmu_evtq		evtq;
 	struct arm_smmu_priq		priq;
@@ -812,6 +814,12 @@ static inline void arm_smmu_sva_notifier_synchronize(void) {}
 #endif /* CONFIG_ARM_SMMU_V3_SVA */
 
 /* Implementation details */
+struct arm_smmu_impl {
+	int (*device_reset)(struct arm_smmu_device *smmu);
+	struct arm_smmu_cmdq *(*get_cmdq)(struct arm_smmu_device *smmu, u64 *cmds, int n);
+};
+
 struct arm_smmu_device *arm_smmu_v3_impl_init(struct arm_smmu_device *smmu);
+struct arm_smmu_device *nvidia_smmu_v3_impl_init(struct arm_smmu_device *smmu);
 
 #endif /* _ARM_SMMU_V3_H */
diff --git a/drivers/iommu/arm/arm-smmu-v3/nvidia-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/nvidia-smmu-v3.c
new file mode 100644
index 000000000000..0c92fe433c6e
--- /dev/null
+++ b/drivers/iommu/arm/arm-smmu-v3/nvidia-smmu-v3.c
@@ -0,0 +1,432 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#define dev_fmt(fmt) "nvidia_smmu_cmdqv: " fmt
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+
+#include "arm-smmu-v3.h"
+
+#define NVIDIA_SMMU_CMDQV_HID		"NVDA0600"
+
+/* CMDQV register page base and size defines */
+#define NVIDIA_CMDQV_CONFIG_BASE	(0)
+#define NVIDIA_CMDQV_CONFIG_SIZE	(SZ_64K)
+#define NVIDIA_VCMDQ_BASE		(0 + SZ_64K)
+#define NVIDIA_VCMDQ_SIZE		(SZ_64K * 2) /* PAGE0 and PAGE1 */
+
+/* CMDQV global config regs */
+#define NVIDIA_CMDQV_CONFIG		0x0000
+#define CMDQV_EN			BIT(0)
+
+#define NVIDIA_CMDQV_PARAM		0x0004
+#define CMDQV_NUM_VINTF_LOG2		GENMASK(11, 8)
+#define CMDQV_NUM_VCMDQ_LOG2		GENMASK(7, 4)
+
+#define NVIDIA_CMDQV_STATUS		0x0008
+#define CMDQV_STATUS			GENMASK(2, 1)
+#define CMDQV_ENABLED			BIT(0)
+
+#define NVIDIA_CMDQV_VINTF_ERR_MAP	0x000C
+#define NVIDIA_CMDQV_VINTF_INT_MASK	0x0014
+#define NVIDIA_CMDQV_VCMDQ_ERR_MAP	0x001C
+
+#define NVIDIA_CMDQV_CMDQ_ALLOC(q)	(0x0200 + 0x4*(q))
+#define CMDQV_CMDQ_ALLOC_VINTF		GENMASK(20, 15)
+#define CMDQV_CMDQ_ALLOC_LVCMDQ		GENMASK(7, 1)
+#define CMDQV_CMDQ_ALLOCATED		BIT(0)
+
+/* VINTF config regs */
+#define NVIDIA_CMDQV_VINTF(v)		(0x1000 + 0x100*(v))
+
+#define NVIDIA_VINTF_CONFIG		0x0000
+#define VINTF_HYP_OWN			BIT(17)
+#define VINTF_VMID			GENMASK(16, 1)
+#define VINTF_EN			BIT(0)
+
+#define NVIDIA_VINTF_STATUS		0x0004
+#define VINTF_STATUS			GENMASK(3, 1)
+#define VINTF_ENABLED			BIT(0)
+
+/* VCMDQ config regs */
+/* -- PAGE0 -- */
+#define NVIDIA_CMDQV_VCMDQ(q)		(NVIDIA_VCMDQ_BASE + 0x80*(q))
+
+#define NVIDIA_VCMDQ_CONS		0x00000
+#define VCMDQ_CONS_ERR			GENMASK(30, 24)
+
+#define NVIDIA_VCMDQ_PROD		0x00004
+
+#define NVIDIA_VCMDQ_CONFIG		0x00008
+#define VCMDQ_EN			BIT(0)
+
+#define NVIDIA_VCMDQ_STATUS		0x0000C
+#define VCMDQ_ENABLED			BIT(0)
+
+#define NVIDIA_VCMDQ_GERROR		0x00010
+#define NVIDIA_VCMDQ_GERRORN		0x00014
+
+/* -- PAGE1 -- */
+#define NVIDIA_VCMDQ_BASE_L(q)		(NVIDIA_CMDQV_VCMDQ(q) + SZ_64K)
+#define VCMDQ_ADDR			GENMASK(63, 5)
+#define VCMDQ_LOG2SIZE			GENMASK(4, 0)
+
+struct nvidia_smmu_vintf {
+	u16			idx;
+	u32			cfg;
+	u32			status;
+
+	void __iomem		*base;
+	struct arm_smmu_cmdq	*vcmdqs;
+};
+
+struct nvidia_smmu {
+	struct arm_smmu_device	smmu;
+
+	struct device		*cmdqv_dev;
+	void __iomem		*cmdqv_base;
+	int			cmdqv_irq;
+
+	/* CMDQV Hardware Params */
+	u16			num_total_vintfs;
+	u16			num_total_vcmdqs;
+	u16			num_vcmdqs_per_vintf;
+
+	/* CMDQV_VINTF(0) reserved for host kernel use */
+	struct nvidia_smmu_vintf vintf0;
+};
+
+static irqreturn_t nvidia_smmu_cmdqv_isr(int irq, void *devid)
+{
+	struct nvidia_smmu *nsmmu = (struct nvidia_smmu *)devid;
+	struct nvidia_smmu_vintf *vintf0 = &nsmmu->vintf0;
+	u32 vintf_err_map[2];
+	u32 vcmdq_err_map[4];
+
+	vintf_err_map[0] = readl_relaxed(nsmmu->cmdqv_base + NVIDIA_CMDQV_VINTF_ERR_MAP);
+	vintf_err_map[1] = readl_relaxed(nsmmu->cmdqv_base + NVIDIA_CMDQV_VINTF_ERR_MAP + 0x4);
+
+	vcmdq_err_map[0] = readl_relaxed(nsmmu->cmdqv_base + NVIDIA_CMDQV_VCMDQ_ERR_MAP);
+	vcmdq_err_map[1] = readl_relaxed(nsmmu->cmdqv_base + NVIDIA_CMDQV_VCMDQ_ERR_MAP + 0x4);
+	vcmdq_err_map[2] = readl_relaxed(nsmmu->cmdqv_base + NVIDIA_CMDQV_VCMDQ_ERR_MAP + 0x8);
+	vcmdq_err_map[3] = readl_relaxed(nsmmu->cmdqv_base + NVIDIA_CMDQV_VCMDQ_ERR_MAP + 0xC);
+
+	dev_warn(nsmmu->cmdqv_dev,
+		 "unexpected cmdqv error reported: vintf_map %08X %08X, vcmdq_map %08X %08X %08X %08X\n",
+		 vintf_err_map[0], vintf_err_map[1], vcmdq_err_map[0], vcmdq_err_map[1],
+		 vcmdq_err_map[2], vcmdq_err_map[3]);
+
+	/* If the error was reported by vintf0, avoid using any of its VCMDQs */
+	if (vintf_err_map[vintf0->idx / 32] & (1 << (vintf0->idx % 32))) {
+		vintf0->status = readl_relaxed(vintf0->base + NVIDIA_VINTF_STATUS);
+
+		dev_warn(nsmmu->cmdqv_dev, "error (0x%lX) reported by host vintf0 - disabling its vcmdqs\n",
+			 FIELD_GET(VINTF_STATUS, vintf0->status));
+	} else if (vintf_err_map[0] || vintf_err_map[1]) {
+		dev_err(nsmmu->cmdqv_dev, "cmdqv error interrupt triggered by unassigned vintf!\n");
+	}
+
+	return IRQ_HANDLED;
+}
+
+/* Adapt struct arm_smmu_cmdq init sequences from arm-smmu-v3.c for VCMDQs */
+static int nvidia_smmu_init_one_arm_smmu_cmdq(struct nvidia_smmu *nsmmu,
+					      struct arm_smmu_cmdq *cmdq,
+					      void __iomem *vcmdq_base,
+					      u16 qidx)
+{
+	struct arm_smmu_queue *q = &cmdq->q;
+	size_t qsz;
+
+	/* struct arm_smmu_cmdq config normally done in arm_smmu_device_hw_probe() */
+	q->llq.max_n_shift = ilog2(SZ_64K >> CMDQ_ENT_SZ_SHIFT);
+
+	/* struct arm_smmu_cmdq config normally done in arm_smmu_init_one_queue() */
+	qsz = (1 << q->llq.max_n_shift) << CMDQ_ENT_SZ_SHIFT;
+	q->base = dmam_alloc_coherent(nsmmu->cmdqv_dev, qsz, &q->base_dma, GFP_KERNEL);
+	if (!q->base) {
+		dev_err(nsmmu->cmdqv_dev, "failed to allocate 0x%zX bytes for VCMDQ%u\n",
+			qsz, qidx);
+		return -ENOMEM;
+	}
+	dev_dbg(nsmmu->cmdqv_dev, "allocated %u entries for VCMDQ%u @ 0x%llX [%pad] ++ %zX",
+		1 << q->llq.max_n_shift, qidx, (u64)q->base, &q->base_dma, qsz);
+
+	q->prod_reg = vcmdq_base + NVIDIA_VCMDQ_PROD;
+	q->cons_reg = vcmdq_base + NVIDIA_VCMDQ_CONS;
+	q->ent_dwords = CMDQ_ENT_DWORDS;
+
+	q->q_base = q->base_dma & VCMDQ_ADDR;
+	q->q_base |= FIELD_PREP(VCMDQ_LOG2SIZE, q->llq.max_n_shift);
+
+	q->llq.prod = q->llq.cons = 0;
+
+	/* struct arm_smmu_cmdq config normally done in arm_smmu_cmdq_init() */
+	atomic_set(&cmdq->owner_prod, 0);
+	atomic_set(&cmdq->lock, 0);
+
+	cmdq->valid_map = (atomic_long_t *)bitmap_zalloc(1 << q->llq.max_n_shift, GFP_KERNEL);
+	if (!cmdq->valid_map) {
+		dev_err(nsmmu->cmdqv_dev, "failed to allocate valid_map for VCMDQ%u\n", qidx);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static int nvidia_smmu_cmdqv_init(struct nvidia_smmu *nsmmu)
+{
+	struct nvidia_smmu_vintf *vintf0 = &nsmmu->vintf0;
+	u32 regval;
+	u16 qidx;
+	int ret;
+
+	/* Setup vintf0 for host kernel */
+	vintf0->idx = 0;
+	vintf0->base = nsmmu->cmdqv_base + NVIDIA_CMDQV_VINTF(0);
+
+	regval = FIELD_PREP(VINTF_HYP_OWN, nsmmu->num_total_vintfs > 1);
+	writel_relaxed(regval, vintf0->base + NVIDIA_VINTF_CONFIG);
+
+	regval |= FIELD_PREP(VINTF_EN, 1);
+	writel_relaxed(regval, vintf0->base + NVIDIA_VINTF_CONFIG);
+
+	vintf0->cfg = regval;
+
+	ret = readl_relaxed_poll_timeout(vintf0->base + NVIDIA_VINTF_STATUS,
+					 regval, regval == VINTF_ENABLED,
+					 1, ARM_SMMU_POLL_TIMEOUT_US);
+	vintf0->status = regval;
+	if (ret) {
+		dev_err(nsmmu->cmdqv_dev, "failed to enable VINTF%u: STATUS = 0x%08X\n",
+			vintf0->idx, regval);
+		return ret;
+	}
+
+	/* Allocate vcmdqs to vintf0 */
+	for (qidx = 0; qidx < nsmmu->num_vcmdqs_per_vintf; qidx++) {
+		regval = FIELD_PREP(CMDQV_CMDQ_ALLOC_VINTF, vintf0->idx);
+		regval |= FIELD_PREP(CMDQV_CMDQ_ALLOC_LVCMDQ, qidx);
+		regval |= CMDQV_CMDQ_ALLOCATED;
+		writel_relaxed(regval, nsmmu->cmdqv_base + NVIDIA_CMDQV_CMDQ_ALLOC(qidx));
+	}
+
+	/* Build an arm_smmu_cmdq for each vcmdq allocated to vintf0 */
+	vintf0->vcmdqs = devm_kcalloc(nsmmu->cmdqv_dev, nsmmu->num_vcmdqs_per_vintf,
+				      sizeof(*vintf0->vcmdqs), GFP_KERNEL);
+	if (!vintf0->vcmdqs)
+		return -ENOMEM;
+
+	for (qidx = 0; qidx < nsmmu->num_vcmdqs_per_vintf; qidx++) {
+		void __iomem *vcmdq_base = nsmmu->cmdqv_base + NVIDIA_CMDQV_VCMDQ(qidx);
+		struct arm_smmu_cmdq *cmdq = &vintf0->vcmdqs[qidx];
+
+		/* Setup struct arm_smmu_cmdq data members */
+		nvidia_smmu_init_one_arm_smmu_cmdq(nsmmu, cmdq, vcmdq_base, qidx);
+
+		/* Configure and enable the vcmdq */
+		writel_relaxed(0, vcmdq_base + NVIDIA_VCMDQ_PROD);
+		writel_relaxed(0, vcmdq_base + NVIDIA_VCMDQ_CONS);
+
+		writeq_relaxed(cmdq->q.q_base, nsmmu->cmdqv_base + NVIDIA_VCMDQ_BASE_L(qidx));
+
+		writel_relaxed(VCMDQ_EN, vcmdq_base + NVIDIA_VCMDQ_CONFIG);
+		ret = readl_poll_timeout(vcmdq_base + NVIDIA_VCMDQ_STATUS,
+					 regval, regval == VCMDQ_ENABLED,
+					 1, ARM_SMMU_POLL_TIMEOUT_US);
+		if (ret) {
+			u32 gerror = readl_relaxed(vcmdq_base + NVIDIA_VCMDQ_GERROR);
+			u32 gerrorn = readl_relaxed(vcmdq_base + NVIDIA_VCMDQ_GERRORN);
+			u32 cons = readl_relaxed(vcmdq_base + NVIDIA_VCMDQ_CONS);
+
+			dev_err(nsmmu->cmdqv_dev,
+				"failed to enable VCMDQ%u: GERROR=0x%X, GERRORN=0x%X, CONS=0x%X\n",
+				qidx, gerror, gerrorn, cons);
+			return ret;
+		}
+
+		dev_info(nsmmu->cmdqv_dev, "VCMDQ%u allocated to VINTF%u as logical-VCMDQ%u\n",
+			 qidx, vintf0->idx, qidx);
+	}
+
+	return 0;
+}
+
+static int nvidia_smmu_probe(struct nvidia_smmu *nsmmu)
+{
+	struct platform_device *cmdqv_pdev = to_platform_device(nsmmu->cmdqv_dev);
+	struct resource *res;
+	u32 regval;
+
+	/* Base address */
+	res = platform_get_resource(cmdqv_pdev, IORESOURCE_MEM, 0);
+	if (!res)
+		return -ENXIO;
+
+	nsmmu->cmdqv_base = devm_ioremap_resource(nsmmu->cmdqv_dev, res);
+	if (IS_ERR(nsmmu->cmdqv_base))
+		return PTR_ERR(nsmmu->cmdqv_base);
+
+	/* Interrupt */
+	nsmmu->cmdqv_irq = platform_get_irq(cmdqv_pdev, 0);
+	if (nsmmu->cmdqv_irq < 0) {
+		dev_warn(nsmmu->cmdqv_dev, "no cmdqv interrupt - errors will not be reported\n");
+		nsmmu->cmdqv_irq = 0;
+	}
+
+	/* Probe the h/w */
+	regval = readl_relaxed(nsmmu->cmdqv_base + NVIDIA_CMDQV_CONFIG);
+	if (!FIELD_GET(CMDQV_EN, regval)) {
+		dev_err(nsmmu->cmdqv_dev, "CMDQV h/w is disabled: CMDQV_CONFIG=0x%08X\n", regval);
+		return -ENODEV;
+	}
+
+	regval = readl_relaxed(nsmmu->cmdqv_base + NVIDIA_CMDQV_STATUS);
+	if (!FIELD_GET(CMDQV_ENABLED, regval) || FIELD_GET(CMDQV_STATUS, regval)) {
+		dev_err(nsmmu->cmdqv_dev, "CMDQV h/w not ready: CMDQV_STATUS=0x%08X\n", regval);
+		return -ENODEV;
+	}
+
+	regval = readl_relaxed(nsmmu->cmdqv_base + NVIDIA_CMDQV_PARAM);
+	nsmmu->num_total_vintfs = 1 <<
FIELD_GET(CMDQV_NUM_VINTF_LOG2, regval); + nsmmu->num_total_vcmdqs = 1 << FIELD_GET(CMDQV_NUM_VCMDQ_LOG2, regval); + nsmmu->num_vcmdqs_per_vintf = nsmmu->num_total_vcmdqs / nsmmu->num_total_vintfs; + + return 0; +} + +static struct arm_smmu_cmdq *nvidia_smmu_get_cmdq(struct arm_smmu_device *smmu, u64 *cmds, int n) +{ + struct nvidia_smmu *nsmmu = (struct nvidia_smmu *)smmu; + struct nvidia_smmu_vintf *vintf0 = &nsmmu->vintf0; + u16 qidx; + + /* Make sure vintf0 is enabled and healthy */ + if (vintf0->status != VINTF_ENABLED) + return &smmu->cmdq; + + /* Check for illegal CMDs */ + if (!FIELD_GET(VINTF_HYP_OWN, vintf0->cfg)) { + u64 opcode = (n) ? FIELD_GET(CMDQ_0_OP, cmds[0]) : CMDQ_OP_CMD_SYNC; + + /* List all non-illegal CMDs for cmdq overriding */ + switch (opcode) { + case CMDQ_OP_TLBI_NH_ASID: + case CMDQ_OP_TLBI_NH_VA: + case CMDQ_OP_TLBI_S12_VMALL: + case CMDQ_OP_TLBI_S2_IPA: + case CMDQ_OP_ATC_INV: + break; + default: + /* Skip overriding for illegal CMDs */ + return &smmu->cmdq; + } + } + + /* + * Select a vcmdq to use. Here we use a temporary solution to + * balance out traffic on cmdq issuing: each cmdq has its own + * lock; if all CPUs issue command lists using the same cmdq, only + * one CPU at a time can enter the process, while the others + * will be spinning at the same lock.
+ */ + qidx = smp_processor_id() % nsmmu->num_vcmdqs_per_vintf; + return &vintf0->vcmdqs[qidx]; +} + +static int nvidia_smmu_device_reset(struct arm_smmu_device *smmu) +{ + struct nvidia_smmu *nsmmu = (struct nvidia_smmu *)smmu; + int ret; + + ret = nvidia_smmu_cmdqv_init(nsmmu); + if (ret) + return ret; + + if (nsmmu->cmdqv_irq) { + ret = devm_request_irq(nsmmu->cmdqv_dev, nsmmu->cmdqv_irq, nvidia_smmu_cmdqv_isr, + IRQF_SHARED, "nvidia-smmu-cmdqv", nsmmu); + if (ret) { + dev_err(nsmmu->cmdqv_dev, "failed to claim irq (%d): %d\n", + nsmmu->cmdqv_irq, ret); + return ret; + } + } + + /* Disable FEAT_MSI and OPT_MSIPOLL since VCMDQs only support CMD_SYNC w/CS_NONE */ + smmu->features &= ~ARM_SMMU_FEAT_MSI; + smmu->options &= ~ARM_SMMU_OPT_MSIPOLL; + + return 0; +} + +const struct arm_smmu_impl nvidia_smmu_impl = { + .device_reset = nvidia_smmu_device_reset, + .get_cmdq = nvidia_smmu_get_cmdq, +}; + +#ifdef CONFIG_ACPI +struct nvidia_smmu *nvidia_smmu_create(struct arm_smmu_device *smmu) +{ + struct nvidia_smmu *nsmmu = NULL; + struct acpi_iort_node *node; + struct acpi_device *adev; + struct device *cmdqv_dev; + const char *match_uid; + + if (acpi_disabled) + return NULL; + + /* Look for a device in the DSDT whose _UID matches the SMMU's iort_node identifier */ + node = *(struct acpi_iort_node **)dev_get_platdata(smmu->dev); + match_uid = kasprintf(GFP_KERNEL, "%u", node->identifier); + adev = acpi_dev_get_first_match_dev(NVIDIA_SMMU_CMDQV_HID, match_uid, -1); + kfree(match_uid); + + if (!adev) + return NULL; + + cmdqv_dev = bus_find_device_by_acpi_dev(&platform_bus_type, adev); + if (!cmdqv_dev) + return NULL; + + dev_info(smmu->dev, "found companion CMDQV device, %s", dev_name(cmdqv_dev)); + + nsmmu = devm_krealloc(smmu->dev, smmu, sizeof(*nsmmu), GFP_KERNEL); + if (!nsmmu) + return ERR_PTR(-ENOMEM); + + nsmmu->cmdqv_dev = cmdqv_dev; + + return nsmmu; +} +#else +struct nvidia_smmu *nvidia_smmu_create(struct arm_smmu_device *smmu) +{ + return NULL; +} +#endif + 
+struct arm_smmu_device *nvidia_smmu_v3_impl_init(struct arm_smmu_device *smmu) +{ + struct nvidia_smmu *nsmmu; + int ret; + + nsmmu = nvidia_smmu_create(smmu); + if (!nsmmu) + return smmu; + + ret = nvidia_smmu_probe(nsmmu); + if (ret) + return ERR_PTR(ret); + + nsmmu->smmu.impl = &nvidia_smmu_impl; + + return &nsmmu->smmu; +}

From patchwork Tue Aug 31 02:59:23 2021
X-Patchwork-Submitter: Nicolin Chen
X-Patchwork-Id: 12466357
From: Nicolin Chen
Subject: [RFC][PATCH v2 13/13] iommu/nvidia-smmu-v3: Add mdev interface support
Date: Mon, 30 Aug 2021 19:59:23 -0700
Message-ID: <20210831025923.15812-14-nicolinc@nvidia.com>
In-Reply-To: <20210831025923.15812-1-nicolinc@nvidia.com>
References: <20210831025923.15812-1-nicolinc@nvidia.com>
MIME-Version: 1.0
X-Mailing-List: kvm@vger.kernel.org
From: Nate Watterson

This patch adds initial mdev interface support to the NVIDIA SMMU CMDQV driver. The NVIDIA SMMU CMDQV module has multiple virtual interfaces (VINTFs), designed to be exposed to virtual machines running in user space, and each VINTF can allocate dedicated VCMDQs for TLB invalidations. The hypervisor can import one of these interfaces into a VM via the VFIO mdev interface, giving the VM access to the VINTF registers in the host kernel.

Each VINTF has two pages of MMIO regions: PAGE0 and PAGE1. PAGE0 holds performance-sensitive registers such as CONS_INDX and PROD_INDX that should be programmed by the guest directly, so the driver provides an mmap implementation via the mdev interface to let user space access PAGE0 directly. PAGE1 holds two base-address configuration registers whose addresses must be translated from guest PAs to host PAs, so accesses to them are trapped and handled via mdev read()/write() for replacement.

As the previous patch mentioned, VINTF0 is reserved for host kernel (or hypervisor) use; a VINTFx (x > 0) should be allocated to a guest VM, and from the guest's perspective that host VINTFx appears as the guest's VINTF0. Besides the two MMIO regions of its VINTF, the guest VM also sees the global configuration MMIO region, as the host kernel does; this global region is likewise handled via mdev read()/write() to limit the guest to accessing only its own bits.

Additionally, there are a couple of requirements for this implementation:
1) Setting into the VINTF CONFIG register the same VMID as the SMMU's s2_cfg.
2) Before enabling the VINTF, programming up to 16 pairs of SID_REPLACE and SID_MATCH registers, which store the host's physical stream IDs and the guest's corresponding virtual stream IDs respectively.

In this patch, we add a pair of ->attach_dev and ->detach_dev callbacks and implement them in the following ways:

1) For each VINTF, pre-allocate a VMID from the arm-smmu-v3 driver's bitmap to create a link between the VINTF index and the VMID, so that either can later be looked up quickly from the other.

2) Program the PHY_SID into the corresponding SID_REPLACE register, but write the iommu_group_id (a stand-in VIRT_SID) into SID_MATCH, since it is the only piece of information about a passthrough device shared between the host kernel and the hypervisor. The hypervisor is then responsible for matching the iommu_group_id and eventually replacing it with the real virtual SID.

3) Note that, with (1), the VMID is now created along with a VINTF in the nvidia_smmu_cmdqv_mdev_create() function, which runs before a hypervisor or VM starts. This differs from the previous situation, where a few patches let the arm-smmu-v3 driver allocate a shared VMID in arm_smmu_attach_dev() when the first passthrough device was added to the VM. In the new situation, the shared VMID needs to be passed to the hypervisor before any passthrough device gets attached. So we reuse the VFIO_IOMMU_GET_VMID command via the mdev ioctl interface to pass the VMID to the CMDQV device model, and then to the SMMUv3 device model, so that the hypervisor can set the same VMID on all IOMMU domains of passthrough devices via the previous pathway through the VFIO core back to the SMMUv3 driver.
Signed-off-by: Nate Watterson Signed-off-by: Nicolin Chen --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 6 + drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 2 + .../iommu/arm/arm-smmu-v3/nvidia-smmu-v3.c | 817 ++++++++++++++++++ 3 files changed, 825 insertions(+) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 1b9459592f76..fc543181ddde 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -2389,6 +2389,9 @@ static void arm_smmu_detach_dev(struct arm_smmu_master *master, struct device *d if (!smmu_domain) return; + if (master->smmu->impl && master->smmu->impl->detach_dev) + master->smmu->impl->detach_dev(smmu_domain, dev); + arm_smmu_disable_ats(master); spin_lock_irqsave(&smmu_domain->devices_lock, flags); @@ -2471,6 +2474,9 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) arm_smmu_enable_ats(master); + if (smmu->impl && smmu->impl->attach_dev) + ret = smmu->impl->attach_dev(smmu_domain, dev); + out_unlock: mutex_unlock(&smmu_domain->init_mutex); return ret; diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h index bb903a7fa662..a872c0d2f23c 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h @@ -817,6 +817,8 @@ static inline void arm_smmu_sva_notifier_synchronize(void) {} struct arm_smmu_impl { int (*device_reset)(struct arm_smmu_device *smmu); struct arm_smmu_cmdq *(*get_cmdq)(struct arm_smmu_device *smmu, u64 *cmds, int n); + int (*attach_dev)(struct arm_smmu_domain *smmu_domain, struct device *dev); + void (*detach_dev)(struct arm_smmu_domain *smmu_domain, struct device *dev); }; struct arm_smmu_device *arm_smmu_v3_impl_init(struct arm_smmu_device *smmu); diff --git a/drivers/iommu/arm/arm-smmu-v3/nvidia-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/nvidia-smmu-v3.c index 0c92fe433c6e..265681ba96bc 100644 --- 
a/drivers/iommu/arm/arm-smmu-v3/nvidia-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/nvidia-smmu-v3.c @@ -7,7 +7,10 @@ #include #include #include +#include +#include #include +#include #include @@ -20,14 +23,17 @@ #define NVIDIA_CMDQV_CONFIG_SIZE (SZ_64K) #define NVIDIA_VCMDQ_BASE (0 + SZ_64K) #define NVIDIA_VCMDQ_SIZE (SZ_64K * 2) /* PAGE0 and PAGE1 */ +#define NVIDIA_VINTF_VCMDQ_BASE (NVIDIA_VCMDQ_BASE + NVIDIA_VCMDQ_SIZE) /* CMDQV global config regs */ #define NVIDIA_CMDQV_CONFIG 0x0000 #define CMDQV_EN BIT(0) #define NVIDIA_CMDQV_PARAM 0x0004 +#define CMDQV_NUM_SID_PER_VM_LOG2 GENMASK(15, 12) #define CMDQV_NUM_VINTF_LOG2 GENMASK(11, 8) #define CMDQV_NUM_VCMDQ_LOG2 GENMASK(7, 4) +#define CMDQV_VER GENMASK(3, 0) #define NVIDIA_CMDQV_STATUS 0x0008 #define CMDQV_STATUS GENMASK(2, 1) @@ -45,6 +51,12 @@ /* VINTF config regs */ #define NVIDIA_CMDQV_VINTF(v) (0x1000 + 0x100*(v)) +#define NVIDIA_VINTFi_CONFIG(i) (NVIDIA_CMDQV_VINTF(i) + NVIDIA_VINTF_CONFIG) +#define NVIDIA_VINTFi_STATUS(i) (NVIDIA_CMDQV_VINTF(i) + NVIDIA_VINTF_STATUS) +#define NVIDIA_VINTFi_SID_MATCH(i, s) (NVIDIA_CMDQV_VINTF(i) + NVIDIA_VINTF_SID_MATCH(s)) +#define NVIDIA_VINTFi_SID_REPLACE(i, s) (NVIDIA_CMDQV_VINTF(i) + NVIDIA_VINTF_SID_REPLACE(s)) +#define NVIDIA_VINTFi_CMDQ_ERR_MAP(i) (NVIDIA_CMDQV_VINTF(i) + NVIDIA_VINTF_CMDQ_ERR_MAP) + #define NVIDIA_VINTF_CONFIG 0x0000 #define VINTF_HYP_OWN BIT(17) #define VINTF_VMID GENMASK(16, 1) @@ -54,6 +66,11 @@ #define VINTF_STATUS GENMASK(3, 1) #define VINTF_ENABLED BIT(0) +#define NVIDIA_VINTF_SID_MATCH(s) (0x0040 + 0x4*(s)) +#define NVIDIA_VINTF_SID_REPLACE(s) (0x0080 + 0x4*(s)) + +#define NVIDIA_VINTF_CMDQ_ERR_MAP 0x00C0 + /* VCMDQ config regs */ /* -- PAGE0 -- */ #define NVIDIA_CMDQV_VCMDQ(q) (NVIDIA_VCMDQ_BASE + 0x80*(q)) @@ -77,13 +94,30 @@ #define VCMDQ_ADDR GENMASK(63, 5) #define VCMDQ_LOG2SIZE GENMASK(4, 0) +#define NVIDIA_VCMDQ0_BASE_L 0x00000 /* offset to NVIDIA_VCMDQ_BASE_L(0) */ +#define NVIDIA_VCMDQ0_BASE_H 0x00004 /* offset to 
NVIDIA_VCMDQ_BASE_L(0) */ +#define NVIDIA_VCMDQ0_CONS_INDX_BASE_L 0x00008 /* offset to NVIDIA_VCMDQ_BASE_L(0) */ +#define NVIDIA_VCMDQ0_CONS_INDX_BASE_H 0x0000C /* offset to NVIDIA_VCMDQ_BASE_L(0) */ + +/* VINTF logical-VCMDQ regs */ +#define NVIDIA_VINTFi_VCMDQ_BASE(i) (NVIDIA_VINTF_VCMDQ_BASE + NVIDIA_VCMDQ_SIZE*(i)) +#define NVIDIA_VINTFi_VCMDQ(i, q) (NVIDIA_VINTFi_VCMDQ_BASE(i) + 0x80*(q)) + struct nvidia_smmu_vintf { u16 idx; + u16 vmid; u32 cfg; u32 status; void __iomem *base; + void __iomem *vcmdq_base; struct arm_smmu_cmdq *vcmdqs; + +#define NVIDIA_SMMU_VINTF_MAX_SIDS 16 + DECLARE_BITMAP(sid_map, NVIDIA_SMMU_VINTF_MAX_SIDS); + u32 sid_replace[NVIDIA_SMMU_VINTF_MAX_SIDS]; + + spinlock_t lock; }; struct nvidia_smmu { @@ -91,6 +125,8 @@ struct nvidia_smmu { struct device *cmdqv_dev; void __iomem *cmdqv_base; + resource_size_t ioaddr; + resource_size_t ioaddr_size; int cmdqv_irq; /* CMDQV Hardware Params */ @@ -98,10 +134,38 @@ struct nvidia_smmu { u16 num_total_vcmdqs; u16 num_vcmdqs_per_vintf; +#define NVIDIA_SMMU_MAX_VINTFS (1 << 6) + DECLARE_BITMAP(vintf_map, NVIDIA_SMMU_MAX_VINTFS); + /* CMDQV_VINTF(0) reserved for host kernel use */ struct nvidia_smmu_vintf vintf0; + + struct nvidia_smmu_vintf **vmid_mappings; + +#ifdef CONFIG_VFIO_MDEV_DEVICE + /* CMDQV_VINTFs exposed to userspace via mdev */ + struct nvidia_cmdqv_mdev **vintf_mdev; + /* Cache for two 64-bit VCMDQ base addresses */ + struct nvidia_cmdqv_vcmdq_regcache { + u64 base_addr; + u64 cons_addr; + } *vcmdq_regcache; + struct mutex mdev_lock; + struct mutex vmid_lock; +#endif }; +#ifdef CONFIG_VFIO_MDEV_DEVICE +struct nvidia_cmdqv_mdev { + struct nvidia_smmu *nsmmu; + struct mdev_device *mdev; + struct nvidia_smmu_vintf *vintf; + + struct notifier_block group_notifier; + struct kvm *kvm; +}; +#endif + static irqreturn_t nvidia_smmu_cmdqv_isr(int irq, void *devid) { struct nvidia_smmu *nsmmu = (struct nvidia_smmu *)devid; @@ -135,6 +199,61 @@ static irqreturn_t nvidia_smmu_cmdqv_isr(int irq, void 
*devid) return IRQ_HANDLED; } +#ifdef CONFIG_VFIO_MDEV_DEVICE +struct mdev_parent_ops nvidia_smmu_cmdqv_mdev_ops; + +int nvidia_smmu_cmdqv_mdev_init(struct nvidia_smmu *nsmmu) +{ + struct nvidia_cmdqv_mdev *cmdqv_mdev; + int ret; + + /* Skip mdev init unless there are available VINTFs */ + if (nsmmu->num_total_vintfs <= 1) + return 0; + + nsmmu->vintf_mdev = devm_kcalloc(nsmmu->cmdqv_dev, nsmmu->num_total_vintfs, + sizeof(*nsmmu->vintf_mdev), GFP_KERNEL); + if (!nsmmu->vintf_mdev) + return -ENOMEM; + + nsmmu->vcmdq_regcache = devm_kcalloc(nsmmu->cmdqv_dev, nsmmu->num_total_vcmdqs, + sizeof(*nsmmu->vcmdq_regcache), GFP_KERNEL); + if (!nsmmu->vcmdq_regcache) + return -ENOMEM; + + nsmmu->vmid_mappings = devm_kcalloc(nsmmu->cmdqv_dev, 1 << nsmmu->smmu.vmid_bits, + sizeof(*nsmmu->vmid_mappings), GFP_KERNEL); + if (!nsmmu->vmid_mappings) + return -ENOMEM; + + mutex_init(&nsmmu->mdev_lock); + mutex_init(&nsmmu->vmid_lock); + + /* Add a dummy mdev instance to represent vintf0 */ + cmdqv_mdev = devm_kzalloc(nsmmu->cmdqv_dev, sizeof(*cmdqv_mdev), GFP_KERNEL); + if (!cmdqv_mdev) + return -ENOMEM; + + cmdqv_mdev->nsmmu = nsmmu; + nsmmu->vintf_mdev[0] = cmdqv_mdev; + + ret = mdev_register_device(nsmmu->cmdqv_dev, &nvidia_smmu_cmdqv_mdev_ops); + if (ret) { + dev_err(nsmmu->cmdqv_dev, "failed to register mdev device: %d\n", ret); + return ret; + } + + platform_set_drvdata(to_platform_device(nsmmu->cmdqv_dev), nsmmu); + + return ret; +} +#else +int nvidia_smmu_cmdqv_mdev_init(struct nvidia_smmu *nsmmu) +{ + return 0; +} +#endif + /* Adapt struct arm_smmu_cmdq init sequences from arm-smmu-v3.c for VCMDQs */ static int nvidia_smmu_init_one_arm_smmu_cmdq(struct nvidia_smmu *nsmmu, struct arm_smmu_cmdq *cmdq, @@ -255,6 +374,16 @@ static int nvidia_smmu_cmdqv_init(struct nvidia_smmu *nsmmu) qidx, vintf0->idx, qidx); } + /* Log this vintf0 in vintf_map */ + set_bit(0, nsmmu->vintf_map); + + spin_lock_init(&vintf0->lock); + +#ifdef CONFIG_VFIO_MDEV_DEVICE + if (nsmmu->vintf_mdev && 
nsmmu->vintf_mdev[0]) + nsmmu->vintf_mdev[0]->vintf = vintf0; +#endif + return 0; } @@ -269,6 +398,9 @@ static int nvidia_smmu_probe(struct nvidia_smmu *nsmmu) if (!res) return -ENXIO; + nsmmu->ioaddr = res->start; + nsmmu->ioaddr_size = resource_size(res); + nsmmu->cmdqv_base = devm_ioremap_resource(nsmmu->cmdqv_dev, res); if (IS_ERR(nsmmu->cmdqv_base)) return PTR_ERR(nsmmu->cmdqv_base); @@ -366,9 +498,131 @@ static int nvidia_smmu_device_reset(struct arm_smmu_device *smmu) return 0; } +static int nvidia_smmu_bitmap_alloc(unsigned long *map, int size) +{ + int idx; + + do { + idx = find_first_zero_bit(map, size); + if (idx == size) + return -ENOSPC; + } while (test_and_set_bit(idx, map)); + + return idx; +} + +static void nvidia_smmu_bitmap_free(unsigned long *map, int idx) +{ + clear_bit(idx, map); +} + +static int nvidia_smmu_attach_dev(struct arm_smmu_domain *smmu_domain, struct device *dev) +{ + struct nvidia_smmu *nsmmu = (struct nvidia_smmu *)smmu_domain->smmu; + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); + struct nvidia_smmu_vintf *vintf = &nsmmu->vintf0; + int i, slot; + +#ifdef CONFIG_VFIO_MDEV_DEVICE + /* Repoint vintf to the corresponding one for Nested Translation mode */ + if (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED) { + u16 vmid = smmu_domain->s2_cfg.vmid; + + mutex_lock(&nsmmu->vmid_lock); + vintf = nsmmu->vmid_mappings[vmid]; + mutex_unlock(&nsmmu->vmid_lock); + if (!vintf) { + dev_err(nsmmu->cmdqv_dev, "failed to find vintf\n"); + return -EINVAL; + } + } +#endif + + for (i = 0; i < fwspec->num_ids; i++) { + unsigned int sid = fwspec->ids[i]; + unsigned long flags; + + /* Find an empty slot of SID_MATCH and SID_REPLACE */ + slot = nvidia_smmu_bitmap_alloc(vintf->sid_map, NVIDIA_SMMU_VINTF_MAX_SIDS); + if (slot < 0) + return -EBUSY; + + /* Write PHY_SID to SID_REPLACE and cache it for quick lookup */ + writel_relaxed(sid, vintf->base + NVIDIA_VINTF_SID_REPLACE(slot)); + + spin_lock_irqsave(&vintf->lock, flags); + 
+		vintf->sid_replace[slot] = sid;
+		spin_unlock_irqrestore(&vintf->lock, flags);
+
+		if (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED) {
+			struct iommu_group *group = iommu_group_get(dev);
+
+			/*
+			 * Mark SID_MATCH with iommu_group_id, without setting the
+			 * ENABLE bit. This allows the hypervisor to look up the
+			 * SID_MATCH register that matches the same iommu_group_id,
+			 * and to eventually update VIRT_SID in SID_MATCH.
+			 */
+			writel_relaxed(iommu_group_id(group) << 1,
+				       vintf->base + NVIDIA_VINTF_SID_MATCH(slot));
+			iommu_group_put(group);
+		}
+	}
+
+	return 0;
+}
+
+static void nvidia_smmu_detach_dev(struct arm_smmu_domain *smmu_domain, struct device *dev)
+{
+	struct nvidia_smmu *nsmmu = (struct nvidia_smmu *)smmu_domain->smmu;
+	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+	struct nvidia_smmu_vintf *vintf = &nsmmu->vintf0;
+	int i, slot;
+
+#ifdef CONFIG_VFIO_MDEV_DEVICE
+	/* Replace vintf0 with the corresponding one for Nested Translation mode */
+	if (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED) {
+		u16 vmid = smmu_domain->s2_cfg.vmid;
+
+		mutex_lock(&nsmmu->vmid_lock);
+		vintf = nsmmu->vmid_mappings[vmid];
+		mutex_unlock(&nsmmu->vmid_lock);
+		if (!vintf) {
+			dev_err(nsmmu->cmdqv_dev, "failed to find vintf\n");
+			return;
+		}
+	}
+#endif
+
+	for (i = 0; i < fwspec->num_ids; i++) {
+		unsigned int sid = fwspec->ids[i];
+		unsigned long flags;
+
+		spin_lock_irqsave(&vintf->lock, flags);
+
+		/* Find the SID_REPLACE register matching sid */
+		for (slot = 0; slot < ARRAY_SIZE(vintf->sid_replace); slot++)
+			if (sid == vintf->sid_replace[slot])
+				break;
+
+		spin_unlock_irqrestore(&vintf->lock, flags);
+
+		if (slot == ARRAY_SIZE(vintf->sid_replace)) {
+			dev_dbg(nsmmu->cmdqv_dev, "failed to find SID_REPLACE slot for sid 0x%x\n",
+				sid);
+			return;
+		}
+
+		writel_relaxed(0, vintf->base + NVIDIA_VINTF_SID_REPLACE(slot));
+		writel_relaxed(0, vintf->base + NVIDIA_VINTF_SID_MATCH(slot));
+
+		nvidia_smmu_bitmap_free(vintf->sid_map, slot);
+	}
+}
+
 const struct arm_smmu_impl nvidia_smmu_impl = {
 	.device_reset = nvidia_smmu_device_reset,
 	.get_cmdq = nvidia_smmu_get_cmdq,
+	.attach_dev = nvidia_smmu_attach_dev,
+	.detach_dev = nvidia_smmu_detach_dev,
 };
 
 #ifdef CONFIG_ACPI
@@ -426,7 +680,570 @@ struct arm_smmu_device *nvidia_smmu_v3_impl_init(struct arm_smmu_device *smmu)
 	if (ret)
 		return ERR_PTR(ret);
 
+	ret = nvidia_smmu_cmdqv_mdev_init(nsmmu);
+	if (ret)
+		return ERR_PTR(ret);
+
 	nsmmu->smmu.impl = &nvidia_smmu_impl;
 
 	return &nsmmu->smmu;
 }
+
+#ifdef CONFIG_VFIO_MDEV_DEVICE
+#define mdev_name(m)	dev_name(mdev_dev(m))
+
+int nvidia_smmu_cmdqv_mdev_create(struct mdev_device *mdev)
+{
+	struct device *parent_dev = mdev_parent_dev(mdev);
+	struct nvidia_smmu *nsmmu = platform_get_drvdata(to_platform_device(parent_dev));
+	struct nvidia_cmdqv_mdev *cmdqv_mdev;
+	struct nvidia_smmu_vintf *vintf;
+	int vmid, idx, ret;
+	u32 regval;
+
+	cmdqv_mdev = kzalloc(sizeof(*cmdqv_mdev), GFP_KERNEL);
+	if (!cmdqv_mdev)
+		return -ENOMEM;
+
+	cmdqv_mdev->vintf = kzalloc(sizeof(*cmdqv_mdev->vintf), GFP_KERNEL);
+	if (!cmdqv_mdev->vintf) {
+		ret = -ENOMEM;
+		goto free_mdev;
+	}
+
+	cmdqv_mdev->mdev = mdev;
+	cmdqv_mdev->nsmmu = nsmmu;
+	vintf = cmdqv_mdev->vintf;
+
+	mutex_lock(&nsmmu->mdev_lock);
+	idx = nvidia_smmu_bitmap_alloc(nsmmu->vintf_map, nsmmu->num_total_vintfs);
+	if (idx < 0) {
+		dev_err(nsmmu->cmdqv_dev, "failed to allocate a vintf\n");
+		mutex_unlock(&nsmmu->mdev_lock);
+		ret = -EBUSY;
+		goto free_vintf;
+	}
+	nsmmu->vintf_mdev[idx] = cmdqv_mdev;
+	mutex_unlock(&nsmmu->mdev_lock);
+
+	mutex_lock(&nsmmu->vmid_lock);
+	vmid = arm_smmu_vmid_alloc(&nsmmu->smmu);
+	if (vmid < 0) {
+		dev_err(nsmmu->cmdqv_dev, "failed to allocate a vmid\n");
+		mutex_unlock(&nsmmu->vmid_lock);
+		ret = -EBUSY;
+		goto free_vintf_map;
+	}
+
+	/* Create a mapping between the vmid and the vintf */
+	nsmmu->vmid_mappings[vmid] = vintf;
+	mutex_unlock(&nsmmu->vmid_lock);
+
+	vintf->idx = idx;
+	vintf->vmid = vmid;
+	vintf->base = nsmmu->cmdqv_base + NVIDIA_CMDQV_VINTF(idx);
+
+	spin_lock_init(&vintf->lock);
+
+	mdev_set_drvdata(mdev, cmdqv_mdev);
+
+	writel_relaxed(0, vintf->base + NVIDIA_VINTF_CONFIG);
+
+	/* Point to NVIDIA_VINTFi_VCMDQ_BASE */
+	vintf->vcmdq_base = nsmmu->cmdqv_base + NVIDIA_VINTFi_VCMDQ_BASE(vintf->idx);
+
+	/* Allocate VCMDQs (2n, 2n+1, 2n+2, ...) to VINTF(idx) as logical-VCMDQs (0, 1, 2, ...) */
+	for (idx = 0; idx < nsmmu->num_vcmdqs_per_vintf; idx++) {
+		u16 vcmdq_idx = nsmmu->num_vcmdqs_per_vintf * vintf->idx + idx;
+
+		regval  = FIELD_PREP(CMDQV_CMDQ_ALLOC_VINTF, vintf->idx);
+		regval |= FIELD_PREP(CMDQV_CMDQ_ALLOC_LVCMDQ, idx);
+		regval |= CMDQV_CMDQ_ALLOCATED;
+		writel_relaxed(regval, nsmmu->cmdqv_base + NVIDIA_CMDQV_CMDQ_ALLOC(vcmdq_idx));
+
+		dev_info(nsmmu->cmdqv_dev, "allocated VCMDQ%u to VINTF%u as logical-VCMDQ%u\n",
+			 vcmdq_idx, vintf->idx, idx);
+	}
+
+	dev_dbg(nsmmu->cmdqv_dev, "allocated VINTF%u to mdev_device (%s) bound to vmid (%d)\n",
+		vintf->idx, mdev_name(mdev), vintf->vmid);
+
+	return 0;
+
+free_vintf_map:
+	mutex_lock(&nsmmu->mdev_lock);
+	nsmmu->vintf_mdev[idx] = NULL;
+	nvidia_smmu_bitmap_free(nsmmu->vintf_map, idx);
+	mutex_unlock(&nsmmu->mdev_lock);
+free_vintf:
+	kfree(cmdqv_mdev->vintf);
+free_mdev:
+	kfree(cmdqv_mdev);
+
+	return ret;
+}
+
+int nvidia_smmu_cmdqv_mdev_remove(struct mdev_device *mdev)
+{
+	struct nvidia_cmdqv_mdev *cmdqv_mdev = mdev_get_drvdata(mdev);
+	struct nvidia_smmu_vintf *vintf = cmdqv_mdev->vintf;
+	struct nvidia_smmu *nsmmu = cmdqv_mdev->nsmmu;
+	u16 idx;
+
+	/* Deallocate the VCMDQs of VINTF(idx) */
+	for (idx = 0; idx < nsmmu->num_vcmdqs_per_vintf; idx++) {
+		u16 vcmdq_idx = nsmmu->num_vcmdqs_per_vintf * vintf->idx + idx;
+
+		writel_relaxed(0, nsmmu->cmdqv_base + NVIDIA_CMDQV_CMDQ_ALLOC(vcmdq_idx));
+
+		dev_info(nsmmu->cmdqv_dev, "deallocated VCMDQ%u from VINTF%u\n",
+			 vcmdq_idx, vintf->idx);
+	}
+
+	/* Disable and clean up the VINTF configuration */
+	writel_relaxed(0, vintf->base + NVIDIA_VINTF_CONFIG);
+
+	mutex_lock(&nsmmu->mdev_lock);
+	nvidia_smmu_bitmap_free(nsmmu->vintf_map, vintf->idx);
+	nsmmu->vintf_mdev[vintf->idx] = NULL;
+	mutex_unlock(&nsmmu->mdev_lock);
+
+	mutex_lock(&nsmmu->vmid_lock);
+	arm_smmu_vmid_free(&nsmmu->smmu, vintf->vmid);
+	nsmmu->vmid_mappings[vintf->vmid] = NULL;
+	mutex_unlock(&nsmmu->vmid_lock);
+
+	mdev_set_drvdata(mdev, NULL);
+	kfree(cmdqv_mdev->vintf);
+	kfree(cmdqv_mdev);
+
+	return 0;
+}
+
+static int nvidia_smmu_cmdqv_mdev_group_notifier(struct notifier_block *nb,
+						 unsigned long action, void *data)
+{
+	struct nvidia_cmdqv_mdev *cmdqv_mdev =
+		container_of(nb, struct nvidia_cmdqv_mdev, group_notifier);
+
+	if (action == VFIO_GROUP_NOTIFY_SET_KVM)
+		cmdqv_mdev->kvm = data;
+
+	return NOTIFY_OK;
+}
+
+int nvidia_smmu_cmdqv_mdev_open(struct mdev_device *mdev)
+{
+	struct nvidia_cmdqv_mdev *cmdqv_mdev = mdev_get_drvdata(mdev);
+	unsigned long events = VFIO_GROUP_NOTIFY_SET_KVM;
+	struct device *dev = mdev_dev(mdev);
+	int ret;
+
+	cmdqv_mdev->group_notifier.notifier_call = nvidia_smmu_cmdqv_mdev_group_notifier;
+
+	ret = vfio_register_notifier(dev, VFIO_GROUP_NOTIFY, &events, &cmdqv_mdev->group_notifier);
+	if (ret)
+		dev_err(dev, "failed to register group notifier: %d\n", ret);
+
+	return ret;
+}
+
+void nvidia_smmu_cmdqv_mdev_release(struct mdev_device *mdev)
+{
+	struct nvidia_cmdqv_mdev *cmdqv_mdev = mdev_get_drvdata(mdev);
+	struct device *dev = mdev_dev(mdev);
+
+	vfio_unregister_notifier(dev, VFIO_GROUP_NOTIFY, &cmdqv_mdev->group_notifier);
+}
+
+ssize_t nvidia_smmu_cmdqv_mdev_read(struct mdev_device *mdev, char __user *buf,
+				    size_t count, loff_t *ppos)
+{
+	struct nvidia_cmdqv_mdev *cmdqv_mdev = mdev_get_drvdata(mdev);
+	struct nvidia_smmu_vintf *vintf = cmdqv_mdev->vintf;
+	struct nvidia_smmu *nsmmu = cmdqv_mdev->nsmmu;
+	struct device *dev = mdev_dev(mdev);
+	loff_t reg_offset = *ppos, reg;
+	u64 regval = 0;
+	u16 idx, slot;
+
+	/* Only support aligned 32/64-bit accesses */
+	if (!count || (count % 4) || count > 8 || (reg_offset % count))
+		return -EINVAL;
+
+	switch (reg_offset) {
+	case NVIDIA_CMDQV_CONFIG:
+		regval = readl_relaxed(nsmmu->cmdqv_base + NVIDIA_CMDQV_CONFIG);
+		break;
+	case NVIDIA_CMDQV_STATUS:
+		regval = readl_relaxed(nsmmu->cmdqv_base + NVIDIA_CMDQV_STATUS);
+		break;
+	case NVIDIA_CMDQV_PARAM:
+		/*
+		 * The guest shall import only one of the VINTFs via the mdev
+		 * interface, so limit the numbers of VINTFs and VCMDQs in the
+		 * PARAM register.
+		 */
+		regval = readl_relaxed(nsmmu->cmdqv_base + NVIDIA_CMDQV_PARAM);
+		regval &= ~(CMDQV_NUM_VINTF_LOG2 | CMDQV_NUM_VCMDQ_LOG2);
+		regval |= FIELD_PREP(CMDQV_NUM_VINTF_LOG2, 0);
+		regval |= FIELD_PREP(CMDQV_NUM_VCMDQ_LOG2, ilog2(nsmmu->num_vcmdqs_per_vintf));
+		break;
+	case NVIDIA_CMDQV_VINTF_ERR_MAP:
+		/* Translate the value to bit 0, as the guest can only see vintf0 */
+		regval = readl_relaxed(vintf->base + NVIDIA_VINTF_STATUS);
+		regval = !!FIELD_GET(VINTF_STATUS, regval);
+		break;
+	case NVIDIA_CMDQV_VINTF_INT_MASK:
+		/* Translate the value to bit 0, as the guest can only see vintf0 */
+		regval = readq_relaxed(nsmmu->cmdqv_base + NVIDIA_CMDQV_VINTF_INT_MASK);
+		regval = !!(regval & BIT(vintf->idx));
+		break;
+	case NVIDIA_CMDQV_VCMDQ_ERR_MAP:
+		regval = readq_relaxed(vintf->base + NVIDIA_VINTF_CMDQ_ERR_MAP);
+		break;
+	case NVIDIA_CMDQV_CMDQ_ALLOC(0) ... NVIDIA_CMDQV_CMDQ_ALLOC(128):
+		idx = (reg_offset - NVIDIA_CMDQV_CMDQ_ALLOC(0)) / 4;
+		if (idx >= nsmmu->num_vcmdqs_per_vintf) {
+			/* The guest only has a limited number of VCMDQs per VINTF */
+			regval = 0;
+		} else {
+			/* The VCMDQs are preallocated, so just report them constantly */
+			regval = FIELD_PREP(CMDQV_CMDQ_ALLOC_LVCMDQ, idx) | CMDQV_CMDQ_ALLOCATED;
+		}
+		break;
+	case NVIDIA_VINTFi_CONFIG(0):
+		regval = readl_relaxed(vintf->base + NVIDIA_VINTF_CONFIG);
+		/* The guest should not see the VMID field */
+		regval &= ~(VINTF_VMID);
+		break;
+	case NVIDIA_VINTFi_STATUS(0):
+		regval = readl_relaxed(vintf->base + NVIDIA_VINTF_STATUS);
+		break;
+	case NVIDIA_VINTFi_SID_MATCH(0, 0) ... NVIDIA_VINTFi_SID_MATCH(0, 15):
+		slot = (reg_offset - NVIDIA_VINTFi_SID_MATCH(0, 0)) / 0x4;
+		regval = readl_relaxed(vintf->base + NVIDIA_VINTF_SID_MATCH(slot));
+		break;
+	case NVIDIA_VINTFi_SID_REPLACE(0, 0) ... NVIDIA_VINTFi_SID_REPLACE(0, 15):
+		/* The guest should not see the PHY_SID, only whether it is set or not */
+		slot = (reg_offset - NVIDIA_VINTFi_SID_REPLACE(0, 0)) / 0x4;
+		regval = !!readl_relaxed(vintf->base + NVIDIA_VINTF_SID_REPLACE(slot));
+		break;
+	case NVIDIA_VINTFi_CMDQ_ERR_MAP(0):
+		regval = readl_relaxed(vintf->base + NVIDIA_VINTF_CMDQ_ERR_MAP);
+		break;
+	case NVIDIA_CMDQV_VCMDQ(0) ... NVIDIA_CMDQV_VCMDQ(128):
+		/* Allow fallback reads of VCMDQ PAGE0 with a warning */
+		dev_warn(dev, "read access at 0x%llx should go through mmap instead!\n", reg_offset);
+
+		/* Adjust reg_offset, since we are reading via the VINTF logical-VCMDQ space */
+		regval = readl_relaxed(vintf->vcmdq_base + reg_offset - NVIDIA_CMDQV_VCMDQ(0));
+		break;
+	case NVIDIA_VCMDQ_BASE_L(0) ... NVIDIA_VCMDQ_BASE_L(128):
+		/* Decipher the idx and reg of the VCMDQ */
+		idx = (reg_offset - NVIDIA_VCMDQ_BASE_L(0)) / 0x80;
+		reg = reg_offset - NVIDIA_VCMDQ_BASE_L(idx);
+
+		switch (reg) {
+		case NVIDIA_VCMDQ0_BASE_L:
+			regval = nsmmu->vcmdq_regcache[idx].base_addr;
+			if (count == 4)
+				regval = lower_32_bits(regval);
+			break;
+		case NVIDIA_VCMDQ0_BASE_H:
+			regval = upper_32_bits(nsmmu->vcmdq_regcache[idx].base_addr);
+			break;
+		case NVIDIA_VCMDQ0_CONS_INDX_BASE_L:
+			regval = nsmmu->vcmdq_regcache[idx].cons_addr;
+			if (count == 4)
+				regval = lower_32_bits(regval);
+			break;
+		case NVIDIA_VCMDQ0_CONS_INDX_BASE_H:
+			regval = upper_32_bits(nsmmu->vcmdq_regcache[idx].cons_addr);
+			break;
+		default:
+			dev_err(dev, "unknown base address read access at 0x%llX\n", reg_offset);
+			break;
+		}
+		break;
+	default:
+		dev_err(dev, "unhandled read access at 0x%llX\n", reg_offset);
+		return -EINVAL;
+	}
+
+	if (copy_to_user(buf, &regval, count))
+		return -EFAULT;
+	*ppos += count;
+
+	return count;
+}
+
+static u64 nvidia_smmu_cmdqv_mdev_gpa_to_pa(struct nvidia_cmdqv_mdev *cmdqv_mdev, u64 gpa)
+{
+	u64 gfn, hfn, hva, hpa, pg_offset;
+	struct page *pg;
+	long num_pages;
+
+	gfn = gpa_to_gfn(gpa);
+	pg_offset = gpa ^ gfn_to_gpa(gfn);
+
+	hva = gfn_to_hva(cmdqv_mdev->kvm, gfn);
+	if (kvm_is_error_hva(hva))
+		return 0;
+
+	num_pages = get_user_pages(hva, 1, FOLL_GET | FOLL_WRITE, &pg, NULL);
+	if (num_pages < 1)
+		return 0;
+
+	hfn = page_to_pfn(pg);
+	hpa = pfn_to_hpa(hfn);
+
+	return hpa | pg_offset;
+}
+
+ssize_t nvidia_smmu_cmdqv_mdev_write(struct mdev_device *mdev, const char __user *buf,
+				     size_t count, loff_t *ppos)
+{
+	struct nvidia_cmdqv_mdev *cmdqv_mdev = mdev_get_drvdata(mdev);
+	struct nvidia_smmu_vintf *vintf = cmdqv_mdev->vintf;
+	struct nvidia_smmu *nsmmu = cmdqv_mdev->nsmmu;
+	struct device *dev = mdev_dev(mdev);
+	loff_t reg_offset = *ppos, reg;
+	u64 mask = U32_MAX;
+	u64 regval = 0x0;
+	u16 idx, slot;
+
+	/* Only support aligned 32/64-bit accesses */
+	if (!count || (count % 4) || count > 8 || (reg_offset % count))
+		return -EINVAL;
+
+	/* Get the value to be written to the register at reg_offset */
+	if (copy_from_user(&regval, buf, count))
+		return -EFAULT;
+
+	switch (reg_offset) {
+	case NVIDIA_VINTFi_CONFIG(0):
+		regval &= ~(VINTF_VMID);
+		regval |= FIELD_PREP(VINTF_VMID, vintf->vmid);
+		writel_relaxed(regval, vintf->base + NVIDIA_VINTF_CONFIG);
+		break;
+	case NVIDIA_CMDQV_CMDQ_ALLOC(0) ... NVIDIA_CMDQV_CMDQ_ALLOC(128):
+		/* Ignore, since the VCMDQs were already allocated to the VINTF */
+		break;
+	case NVIDIA_VINTFi_SID_MATCH(0, 0) ... NVIDIA_VINTFi_SID_MATCH(0, 15):
+		slot = (reg_offset - NVIDIA_VINTFi_SID_MATCH(0, 0)) / 0x4;
+		writel_relaxed(regval, vintf->base + NVIDIA_VINTF_SID_MATCH(slot));
+		break;
+	case NVIDIA_VINTFi_SID_REPLACE(0, 0) ... NVIDIA_VINTFi_SID_REPLACE(0, 15):
+		/* The guest must not alter the value */
+		break;
+	case NVIDIA_CMDQV_VCMDQ(0) ... NVIDIA_CMDQV_VCMDQ(128):
+		/* Allow fallback writes to VCMDQ PAGE0 with a warning */
+		dev_warn(dev, "write access at 0x%llx should go through mmap instead!\n", reg_offset);
+
+		/* Adjust reg_offset, since we are writing via the VINTF logical-VCMDQ space */
+		writel_relaxed(regval, vintf->vcmdq_base + reg_offset - NVIDIA_CMDQV_VCMDQ(0));
+		break;
+	case NVIDIA_VCMDQ_BASE_L(0) ... NVIDIA_VCMDQ_BASE_L(128):
+		/* Decipher the idx and reg of the VCMDQ */
+		idx = (reg_offset - NVIDIA_VCMDQ_BASE_L(0)) / 0x80;
+		reg = reg_offset - NVIDIA_VCMDQ_BASE_L(idx);
+
+		switch (reg) {
+		case NVIDIA_VCMDQ0_BASE_L:
+			if (count == 8)
+				mask = U64_MAX;
+			regval &= mask;
+			nsmmu->vcmdq_regcache[idx].base_addr &= ~mask;
+			nsmmu->vcmdq_regcache[idx].base_addr |= regval;
+			regval = nsmmu->vcmdq_regcache[idx].base_addr;
+			break;
+		case NVIDIA_VCMDQ0_BASE_H:
+			nsmmu->vcmdq_regcache[idx].base_addr &= U32_MAX;
+			nsmmu->vcmdq_regcache[idx].base_addr |= regval << 32;
+			regval = nsmmu->vcmdq_regcache[idx].base_addr;
+			break;
+		case NVIDIA_VCMDQ0_CONS_INDX_BASE_L:
+			if (count == 8)
+				mask = U64_MAX;
+			regval &= mask;
+			nsmmu->vcmdq_regcache[idx].cons_addr &= ~mask;
+			nsmmu->vcmdq_regcache[idx].cons_addr |= regval;
+			regval = nsmmu->vcmdq_regcache[idx].cons_addr;
+			break;
+		case NVIDIA_VCMDQ0_CONS_INDX_BASE_H:
+			nsmmu->vcmdq_regcache[idx].cons_addr &= U32_MAX;
+			nsmmu->vcmdq_regcache[idx].cons_addr |= regval << 32;
+			regval = nsmmu->vcmdq_regcache[idx].cons_addr;
+			break;
+		default:
+			dev_err(dev, "unknown base address write access at 0x%llX\n", reg_offset);
+			return -EFAULT;
+		}
+
+		/* Translate the guest PA to a host PA before writing to the address register */
+		regval = nvidia_smmu_cmdqv_mdev_gpa_to_pa(cmdqv_mdev, regval);
+
+		/* Do not fail the mdev write, as the higher/lower halves can be written separately */
+		if (!regval)
+			dev_dbg(dev, "failed to convert guest address for VCMDQ%d\n", idx);
+
+		/* Adjust reg_offset, since we are accessing it via the VINTF VCMDQ aperture */
+		reg_offset -= NVIDIA_CMDQV_VCMDQ(0);
+		if (count == 8)
+			writeq_relaxed(regval, vintf->vcmdq_base + reg_offset);
+		else
+			writel_relaxed(regval, vintf->vcmdq_base + reg_offset);
+		break;
+	default:
+		dev_err(dev, "unhandled write access at 0x%llX\n", reg_offset);
+		return -EINVAL;
+	}
+
+	*ppos += count;
+
+	return count;
+}
+
+long nvidia_smmu_cmdqv_mdev_ioctl(struct mdev_device *mdev, unsigned int cmd, unsigned long arg)
+{
+	struct nvidia_cmdqv_mdev *cmdqv_mdev = mdev_get_drvdata(mdev);
+	struct nvidia_smmu_vintf *vintf = cmdqv_mdev->vintf;
+	struct device *dev = mdev_dev(mdev);
+	struct vfio_device_info device_info;
+	struct vfio_region_info region_info;
+	unsigned long minsz;
+
+	switch (cmd) {
+	case VFIO_DEVICE_GET_INFO:
+		minsz = offsetofend(struct vfio_device_info, num_irqs);
+
+		if (copy_from_user(&device_info, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		if (device_info.argsz < minsz)
+			return -EINVAL;
+
+		device_info.flags = 0;
+		device_info.num_irqs = 0;
+		/* MMIO regions: [0] CMDQV_CONFIG, [1] VCMDQ_PAGE0, [2] VCMDQ_PAGE1 */
+		device_info.num_regions = 3;
+
+		return copy_to_user((void __user *)arg, &device_info, minsz) ? -EFAULT : 0;
+	case VFIO_DEVICE_GET_REGION_INFO:
+		minsz = offsetofend(struct vfio_region_info, offset);
+
+		if (copy_from_user(&region_info, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		if (region_info.argsz < minsz)
+			return -EINVAL;
+
+		if (region_info.index >= 3)
+			return -EINVAL;
+
+		/* MMIO regions: [0] CMDQV_CONFIG, [1] VCMDQ_PAGE0, [2] VCMDQ_PAGE1 */
+		region_info.size = SZ_64K;
+		region_info.offset = region_info.index * SZ_64K;
+		region_info.flags = VFIO_REGION_INFO_FLAG_READ | VFIO_REGION_INFO_FLAG_WRITE;
+		/* In the case of VCMDQ_PAGE0, add FLAG_MMAP */
+		if (region_info.index == 1)
+			region_info.flags |= VFIO_REGION_INFO_FLAG_MMAP;
+
+		return copy_to_user((void __user *)arg, &region_info, minsz) ? -EFAULT : 0;
+	case VFIO_IOMMU_GET_VMID:
+		return copy_to_user((void __user *)arg, &vintf->vmid, sizeof(u16)) ? -EFAULT : 0;
+	default:
+		dev_err(dev, "unhandled ioctl cmd 0x%X\n", cmd);
+		return -ENOTTY;
+	}
+
+	return 0;
+}
+
+int nvidia_smmu_cmdqv_mdev_mmap(struct mdev_device *mdev, struct vm_area_struct *vma)
+{
+	struct nvidia_cmdqv_mdev *cmdqv_mdev = mdev_get_drvdata(mdev);
+	struct nvidia_smmu_vintf *vintf = cmdqv_mdev->vintf;
+	struct nvidia_smmu *nsmmu = cmdqv_mdev->nsmmu;
+	struct device *dev = mdev_dev(mdev);
+	unsigned int region_idx;
+	unsigned long size;
+
+	/* Make sure that only the VCMDQ_PAGE0 MMIO region can be mmapped */
+	region_idx = (vma->vm_pgoff << PAGE_SHIFT) / SZ_64K;
+	if (region_idx != 0x1) {
+		dev_err(dev, "mmap unsupported for region_idx %d\n", region_idx);
+		return -EINVAL;
+	}
+
+	size = vma->vm_end - vma->vm_start;
+	if (size > SZ_64K)
+		return -EINVAL;
+
+	/* Fix up the VMA */
+	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+
+	/* Map PAGE0 of VINTF[idx] */
+	vma->vm_pgoff = nsmmu->ioaddr + NVIDIA_VINTFi_VCMDQ_BASE(vintf->idx);
+	vma->vm_pgoff >>= PAGE_SHIFT;
+
+	return remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff, size, vma->vm_page_prot);
+}
+
+static ssize_t name_show(struct mdev_type *mtype,
+			 struct mdev_type_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%s\n", "NVIDIA_SMMU_CMDQV_VINTF - (2 VCMDQs/VINTF)");
+}
+static MDEV_TYPE_ATTR_RO(name);
+
+static ssize_t available_instances_show(struct mdev_type *mtype,
+					struct mdev_type_attribute *attr, char *buf)
+{
+	struct device *parent_dev = mtype_get_parent_dev(mtype);
+	struct nvidia_smmu *nsmmu = platform_get_drvdata(to_platform_device(parent_dev));
+	u16 idx, cnt = 0;
+
+	mutex_lock(&nsmmu->mdev_lock);
+	for (idx = 0; idx < nsmmu->num_total_vintfs; idx++)
+		cnt += !nsmmu->vintf_mdev[idx];
+	mutex_unlock(&nsmmu->mdev_lock);
+
+	return sprintf(buf, "%d\n", cnt);
+}
+static MDEV_TYPE_ATTR_RO(available_instances);
+
+static ssize_t device_api_show(struct mdev_type *mtype,
+			       struct mdev_type_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%s\n", VFIO_DEVICE_API_PLATFORM_STRING);
+}
+static MDEV_TYPE_ATTR_RO(device_api);
+
+static struct attribute *mdev_types_attrs[] = {
+	&mdev_type_attr_name.attr,
+	&mdev_type_attr_device_api.attr,
+	&mdev_type_attr_available_instances.attr,
+	NULL,
+};
+
+static struct attribute_group mdev_type_group1 = {
+	.name = "nvidia_cmdqv_vintf",
+	.attrs = mdev_types_attrs,
+};
+
+static struct attribute_group *mdev_type_groups[] = {
+	&mdev_type_group1,
+	NULL,
+};
+
+struct mdev_parent_ops nvidia_smmu_cmdqv_mdev_ops = {
+	.owner = THIS_MODULE,
+	.supported_type_groups = mdev_type_groups,
+	.create = nvidia_smmu_cmdqv_mdev_create,
+	.remove = nvidia_smmu_cmdqv_mdev_remove,
+	.open = nvidia_smmu_cmdqv_mdev_open,
+	.release = nvidia_smmu_cmdqv_mdev_release,
+	.read = nvidia_smmu_cmdqv_mdev_read,
+	.write = nvidia_smmu_cmdqv_mdev_write,
+	.ioctl = nvidia_smmu_cmdqv_mdev_ioctl,
+	.mmap = nvidia_smmu_cmdqv_mdev_mmap,
+};
+
+#endif /* CONFIG_VFIO_MDEV_DEVICE */