From patchwork Tue Jun 7 00:34:35 2022
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 12871213
From: Jason Gunthorpe
To: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
    Christian Borntraeger, Cornelia Huck, Daniel Vetter,
    dri-devel@lists.freedesktop.org, Eric Farman, Harald Freudenberger,
    Vasily Gorbik, Heiko Carstens, intel-gfx@lists.freedesktop.org,
    intel-gvt-dev@lists.freedesktop.org, Jani Nikula, Jason Herne,
    Joonas Lahtinen, kvm@vger.kernel.org, linux-s390@vger.kernel.org,
    Matthew Rosato, Peter Oberparleiter, Halil Pasic, Rodrigo Vivi,
    Sven Schnelle, Tvrtko Ursulin, Vineeth Vijayan, Zhenyu Wang, Zhi Wang
Cc: Christoph Hellwig
Date: Mon, 6 Jun 2022 21:34:35 -0300
Message-Id: <1-v1-896844109f36+a-vfio_unmap_notif_jgg@nvidia.com>
In-Reply-To: <0-v1-896844109f36+a-vfio_unmap_notif_jgg@nvidia.com>
Subject: [Intel-gfx] [PATCH 1/2] vfio: Replace the DMA unmapping notifier
 with a callback

Instead of having drivers register the notifier with explicit code, just
have them provide a dma_unmap callback op in their driver ops and rely on
the core code to wire it up.
Suggested-by: Christoph Hellwig
Signed-off-by: Jason Gunthorpe
Reviewed-by: Christoph Hellwig
---
 drivers/gpu/drm/i915/gvt/gvt.h        |   1 -
 drivers/gpu/drm/i915/gvt/kvmgt.c      |  75 ++++-----------
 drivers/s390/cio/vfio_ccw_ops.c       |  41 ++-------
 drivers/s390/cio/vfio_ccw_private.h   |   1 -
 drivers/s390/crypto/vfio_ap_ops.c     |  54 +++--------
 drivers/s390/crypto/vfio_ap_private.h |   3 -
 drivers/vfio/vfio.c                   | 126 +++++++++-----------------
 drivers/vfio/vfio.h                   |   5 +
 include/linux/vfio.h                  |  21 +----
 9 files changed, 92 insertions(+), 235 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index aee1a45da74bcb..705689e6401197 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -226,7 +226,6 @@ struct intel_vgpu {
 	unsigned long nr_cache_entries;
 	struct mutex cache_lock;
 
-	struct notifier_block iommu_notifier;
 	atomic_t released;
 
 	struct kvm_page_track_notifier_node track_node;
diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index e2f6c56ab3420c..4000659ad715d4 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -729,34 +729,27 @@ int intel_gvt_set_edid(struct intel_vgpu *vgpu, int port_num)
 	return ret;
 }
 
-static int intel_vgpu_iommu_notifier(struct notifier_block *nb,
-				     unsigned long action, void *data)
+static void intel_vgpu_dma_unmap(struct vfio_device *vfio_dev, u64 iova,
+				 u64 length)
 {
-	struct intel_vgpu *vgpu =
-		container_of(nb, struct intel_vgpu, iommu_notifier);
+	struct intel_vgpu *vgpu = vfio_dev_to_vgpu(vfio_dev);
+	struct gvt_dma *entry;
+	u64 iov_pfn, end_iov_pfn;
 
-	if (action == VFIO_IOMMU_NOTIFY_DMA_UNMAP) {
-		struct vfio_iommu_type1_dma_unmap *unmap = data;
-		struct gvt_dma *entry;
-		unsigned long iov_pfn, end_iov_pfn;
+	iov_pfn = iova >> PAGE_SHIFT;
+	end_iov_pfn = iov_pfn + length / PAGE_SIZE;
 
-		iov_pfn = unmap->iova >> PAGE_SHIFT;
-		end_iov_pfn = iov_pfn + unmap->size / PAGE_SIZE;
+	mutex_lock(&vgpu->cache_lock);
+	for (; iov_pfn < end_iov_pfn; iov_pfn++) {
+		entry = __gvt_cache_find_gfn(vgpu, iov_pfn);
+		if (!entry)
+			continue;
 
-		mutex_lock(&vgpu->cache_lock);
-		for (; iov_pfn < end_iov_pfn; iov_pfn++) {
-			entry = __gvt_cache_find_gfn(vgpu, iov_pfn);
-			if (!entry)
-				continue;
-
-			gvt_dma_unmap_page(vgpu, entry->gfn, entry->dma_addr,
-					   entry->size);
-			__gvt_cache_remove_entry(vgpu, entry);
-		}
-		mutex_unlock(&vgpu->cache_lock);
+		gvt_dma_unmap_page(vgpu, entry->gfn, entry->dma_addr,
+				   entry->size);
+		__gvt_cache_remove_entry(vgpu, entry);
 	}
-
-	return NOTIFY_OK;
+	mutex_unlock(&vgpu->cache_lock);
 }
 
 static bool __kvmgt_vgpu_exist(struct intel_vgpu *vgpu)
@@ -783,36 +776,20 @@ static bool __kvmgt_vgpu_exist(struct intel_vgpu *vgpu)
 static int intel_vgpu_open_device(struct vfio_device *vfio_dev)
 {
 	struct intel_vgpu *vgpu = vfio_dev_to_vgpu(vfio_dev);
-	unsigned long events;
-	int ret;
-
-	vgpu->iommu_notifier.notifier_call = intel_vgpu_iommu_notifier;
-	events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
-	ret = vfio_register_notifier(vfio_dev, VFIO_IOMMU_NOTIFY, &events,
-				     &vgpu->iommu_notifier);
-	if (ret != 0) {
-		gvt_vgpu_err("vfio_register_notifier for iommu failed: %d\n",
-			     ret);
-		goto out;
-	}
-
-	ret = -EEXIST;
 	if (vgpu->attached)
-		goto undo_iommu;
+		return -EEXIST;
 
-	ret = -ESRCH;
 	if (!vgpu->vfio_device.kvm ||
 	    vgpu->vfio_device.kvm->mm != current->mm) {
 		gvt_vgpu_err("KVM is required to use Intel vGPU\n");
-		goto undo_iommu;
+		return -ESRCH;
 	}
 
 	kvm_get_kvm(vgpu->vfio_device.kvm);
 
-	ret = -EEXIST;
 	if (__kvmgt_vgpu_exist(vgpu))
-		goto undo_iommu;
+		return -EEXIST;
 
 	vgpu->attached = true;
 
@@ -831,12 +808,6 @@ static int intel_vgpu_open_device(struct vfio_device *vfio_dev)
 	atomic_set(&vgpu->released, 0);
 	return 0;
-
-undo_iommu:
-	vfio_unregister_notifier(vfio_dev, VFIO_IOMMU_NOTIFY,
-				 &vgpu->iommu_notifier);
-out:
-	return ret;
 }
 
 static void intel_vgpu_release_msi_eventfd_ctx(struct intel_vgpu *vgpu)
@@ -853,8 +824,6 @@ static void intel_vgpu_release_msi_eventfd_ctx(struct intel_vgpu *vgpu)
 static void intel_vgpu_close_device(struct vfio_device *vfio_dev)
 {
 	struct intel_vgpu *vgpu = vfio_dev_to_vgpu(vfio_dev);
-	struct drm_i915_private *i915 = vgpu->gvt->gt->i915;
-	int ret;
 
 	if (!vgpu->attached)
 		return;
@@ -864,11 +833,6 @@ static void intel_vgpu_close_device(struct vfio_device *vfio_dev)
 
 	intel_gvt_release_vgpu(vgpu);
 
-	ret = vfio_unregister_notifier(&vgpu->vfio_device, VFIO_IOMMU_NOTIFY,
-				       &vgpu->iommu_notifier);
-	drm_WARN(&i915->drm, ret,
-		 "vfio_unregister_notifier for iommu failed: %d\n", ret);
-
 	debugfs_remove(debugfs_lookup(KVMGT_DEBUGFS_FILENAME, vgpu->debugfs));
 
 	kvm_page_track_unregister_notifier(vgpu->vfio_device.kvm,
@@ -1610,6 +1574,7 @@ static const struct vfio_device_ops intel_vgpu_dev_ops = {
 	.write = intel_vgpu_write,
 	.mmap = intel_vgpu_mmap,
 	.ioctl = intel_vgpu_ioctl,
+	.dma_unmap = intel_vgpu_dma_unmap,
 };
 
 static int intel_vgpu_probe(struct mdev_device *mdev)
diff --git a/drivers/s390/cio/vfio_ccw_ops.c b/drivers/s390/cio/vfio_ccw_ops.c
index b49e2e9db2dc6f..d7e3c3ea98518f 100644
--- a/drivers/s390/cio/vfio_ccw_ops.c
+++ b/drivers/s390/cio/vfio_ccw_ops.c
@@ -44,31 +44,19 @@ static int vfio_ccw_mdev_reset(struct vfio_ccw_private *private)
 	return ret;
 }
 
-static int vfio_ccw_mdev_notifier(struct notifier_block *nb,
-				  unsigned long action,
-				  void *data)
+static void vfio_ccw_dma_unmap(struct vfio_device *vdev, u64 iova, u64 length)
 {
 	struct vfio_ccw_private *private =
-		container_of(nb, struct vfio_ccw_private, nb);
-
-	/*
-	 * Vendor drivers MUST unpin pages in response to an
-	 * invalidation.
-	 */
-	if (action == VFIO_IOMMU_NOTIFY_DMA_UNMAP) {
-		struct vfio_iommu_type1_dma_unmap *unmap = data;
-
-		if (!cp_iova_pinned(&private->cp, unmap->iova))
-			return NOTIFY_OK;
+		container_of(vdev, struct vfio_ccw_private, vdev);
 
-		if (vfio_ccw_mdev_reset(private))
-			return NOTIFY_BAD;
+	/* Vendor drivers MUST unpin pages in response to an invalidation. */
+	if (!cp_iova_pinned(&private->cp, iova))
+		return;
 
-		cp_free(&private->cp);
-		return NOTIFY_OK;
-	}
+	if (vfio_ccw_mdev_reset(private))
+		return;
 
-	return NOTIFY_DONE;
+	cp_free(&private->cp);
 }
 
 static ssize_t name_show(struct mdev_type *mtype,
@@ -178,19 +166,11 @@ static int vfio_ccw_mdev_open_device(struct vfio_device *vdev)
 {
 	struct vfio_ccw_private *private =
 		container_of(vdev, struct vfio_ccw_private, vdev);
-	unsigned long events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
 	int ret;
 
-	private->nb.notifier_call = vfio_ccw_mdev_notifier;
-
-	ret = vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY,
-				     &events, &private->nb);
-	if (ret)
-		return ret;
-
 	ret = vfio_ccw_register_async_dev_regions(private);
 	if (ret)
-		goto out_unregister;
+		return ret;
 
 	ret = vfio_ccw_register_schib_dev_regions(private);
 	if (ret)
@@ -204,7 +184,6 @@ static int vfio_ccw_mdev_open_device(struct vfio_device *vdev)
 
 out_unregister:
 	vfio_ccw_unregister_dev_regions(private);
-	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY, &private->nb);
 	return ret;
 }
 
@@ -222,7 +201,6 @@ static void vfio_ccw_mdev_close_device(struct vfio_device *vdev)
 
 	cp_free(&private->cp);
 	vfio_ccw_unregister_dev_regions(private);
-	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY, &private->nb);
 }
 
 static ssize_t vfio_ccw_mdev_read_io_region(struct vfio_ccw_private *private,
@@ -645,6 +623,7 @@ static const struct vfio_device_ops vfio_ccw_dev_ops = {
 	.write = vfio_ccw_mdev_write,
 	.ioctl = vfio_ccw_mdev_ioctl,
 	.request = vfio_ccw_mdev_request,
+	.dma_unmap = vfio_ccw_dma_unmap,
 };
 
 struct mdev_driver vfio_ccw_mdev_driver = {
diff --git a/drivers/s390/cio/vfio_ccw_private.h b/drivers/s390/cio/vfio_ccw_private.h
index 7272eb78861244..2627791c9006d4 100644
--- a/drivers/s390/cio/vfio_ccw_private.h
+++ b/drivers/s390/cio/vfio_ccw_private.h
@@ -98,7 +98,6 @@ struct vfio_ccw_private {
 	struct completion	*completion;
 	atomic_t		avail;
 	struct mdev_device	*mdev;
-	struct notifier_block	nb;
 	struct ccw_io_region	*io_region;
 	struct mutex		io_mutex;
 	struct vfio_ccw_region *region;
diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
index a7d2a95796d360..65b2bd44dd35b8 100644
--- a/drivers/s390/crypto/vfio_ap_ops.c
+++ b/drivers/s390/crypto/vfio_ap_ops.c
@@ -1227,33 +1227,20 @@ static int vfio_ap_mdev_set_kvm(struct ap_matrix_mdev *matrix_mdev,
 }
 
 /**
- * vfio_ap_mdev_iommu_notifier - IOMMU notifier callback
+ * vfio_ap_mdev_dma_unmap - Notifier that IOVA has been unmapped
+ * @vdev: The VFIO device
+ * @unmap: IOVA range unmapped
  *
- * @nb: The notifier block
- * @action: Action to be taken
- * @data: data associated with the request
- *
- * For an UNMAP request, unpin the guest IOVA (the NIB guest address we
- * pinned before). Other requests are ignored.
- *
- * Return: for an UNMAP request, NOFITY_OK; otherwise NOTIFY_DONE.
+ * Unpin the guest IOVA (the NIB guest address we pinned before).
 */
-static int vfio_ap_mdev_iommu_notifier(struct notifier_block *nb,
-				       unsigned long action, void *data)
+static void vfio_ap_mdev_dma_unmap(struct vfio_device *vdev, u64 iova,
+				   u64 length)
 {
-	struct ap_matrix_mdev *matrix_mdev;
-
-	matrix_mdev = container_of(nb, struct ap_matrix_mdev, iommu_notifier);
-
-	if (action == VFIO_IOMMU_NOTIFY_DMA_UNMAP) {
-		struct vfio_iommu_type1_dma_unmap *unmap = data;
-		unsigned long g_pfn = unmap->iova >> PAGE_SHIFT;
-
-		vfio_unpin_pages(&matrix_mdev->vdev, &g_pfn, 1);
-		return NOTIFY_OK;
-	}
+	struct ap_matrix_mdev *matrix_mdev =
+		container_of(vdev, struct ap_matrix_mdev, vdev);
+	unsigned long g_pfn = iova >> PAGE_SHIFT;
 
-	return NOTIFY_DONE;
+	vfio_unpin_pages(&matrix_mdev->vdev, &g_pfn, 1);
 }
 
 /**
@@ -1380,27 +1367,11 @@ static int vfio_ap_mdev_open_device(struct vfio_device *vdev)
 {
 	struct ap_matrix_mdev *matrix_mdev =
 		container_of(vdev, struct ap_matrix_mdev, vdev);
-	unsigned long events;
-	int ret;
 
 	if (!vdev->kvm)
 		return -EINVAL;
 
-	ret = vfio_ap_mdev_set_kvm(matrix_mdev, vdev->kvm);
-	if (ret)
-		return ret;
-
-	matrix_mdev->iommu_notifier.notifier_call = vfio_ap_mdev_iommu_notifier;
-	events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
-	ret = vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY, &events,
-				     &matrix_mdev->iommu_notifier);
-	if (ret)
-		goto err_kvm;
-	return 0;
-
-err_kvm:
-	vfio_ap_mdev_unset_kvm(matrix_mdev);
-	return ret;
+	return vfio_ap_mdev_set_kvm(matrix_mdev, vdev->kvm);
 }
 
 static void vfio_ap_mdev_close_device(struct vfio_device *vdev)
@@ -1408,8 +1379,6 @@ static void vfio_ap_mdev_close_device(struct vfio_device *vdev)
 	struct ap_matrix_mdev *matrix_mdev =
 		container_of(vdev, struct ap_matrix_mdev, vdev);
 
-	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY,
-				 &matrix_mdev->iommu_notifier);
 	vfio_ap_mdev_unset_kvm(matrix_mdev);
 }
 
@@ -1461,6 +1430,7 @@ static const struct vfio_device_ops vfio_ap_matrix_dev_ops = {
 	.open_device = vfio_ap_mdev_open_device,
 	.close_device = vfio_ap_mdev_close_device,
 	.ioctl = vfio_ap_mdev_ioctl,
+	.dma_unmap = vfio_ap_mdev_dma_unmap,
 };
 
 static struct mdev_driver vfio_ap_matrix_driver = {
diff --git a/drivers/s390/crypto/vfio_ap_private.h b/drivers/s390/crypto/vfio_ap_private.h
index a26efd804d0df3..abb59d59f81b20 100644
--- a/drivers/s390/crypto/vfio_ap_private.h
+++ b/drivers/s390/crypto/vfio_ap_private.h
@@ -81,8 +81,6 @@ struct ap_matrix {
 * @node:	allows the ap_matrix_mdev struct to be added to a list
 * @matrix:	the adapters, usage domains and control domains assigned to the
 *		mediated matrix device.
- * @iommu_notifier: notifier block used for specifying callback function for
- *		    handling the VFIO_IOMMU_NOTIFY_DMA_UNMAP even
 * @kvm:	the struct holding guest's state
 * @pqap_hook:	the function pointer to the interception handler for the
 *		PQAP(AQIC) instruction.
@@ -92,7 +90,6 @@ struct ap_matrix_mdev {
 	struct vfio_device vdev;
 	struct list_head node;
 	struct ap_matrix matrix;
-	struct notifier_block iommu_notifier;
 	struct kvm *kvm;
 	crypto_hook pqap_hook;
 	struct mdev_device *mdev;
diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index 61e71c1154be67..f005b644ab9e69 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -1077,8 +1077,20 @@ static void vfio_device_unassign_container(struct vfio_device *device)
 	up_write(&device->group->group_rwsem);
 }
 
+static int vfio_iommu_notifier(struct notifier_block *nb, unsigned long action,
+			       void *data)
+{
+	struct vfio_device *vfio_device =
+		container_of(nb, struct vfio_device, iommu_nb);
+	struct vfio_iommu_type1_dma_unmap *unmap = data;
+
+	vfio_device->ops->dma_unmap(vfio_device, unmap->iova, unmap->size);
+	return NOTIFY_OK;
+}
+
 static struct file *vfio_device_open(struct vfio_device *device)
 {
+	struct vfio_iommu_driver *iommu_driver;
 	struct file *filep;
 	int ret;
 
@@ -1109,6 +1121,18 @@ static struct file *vfio_device_open(struct vfio_device *device)
 			if (ret)
 				goto err_undo_count;
 		}
+
+		iommu_driver = device->group->container->iommu_driver;
+		if (device->ops->dma_unmap && iommu_driver &&
+		    iommu_driver->ops->register_notifier) {
+			unsigned long events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
+
+			device->iommu_nb.notifier_call = vfio_iommu_notifier;
+			iommu_driver->ops->register_notifier(
+				device->group->container->iommu_data, &events,
+				&device->iommu_nb);
+		}
+
 		up_read(&device->group->group_rwsem);
 	}
 	mutex_unlock(&device->dev_set->lock);
@@ -1143,8 +1167,16 @@ static struct file *vfio_device_open(struct vfio_device *device)
 err_close_device:
 	mutex_lock(&device->dev_set->lock);
 	down_read(&device->group->group_rwsem);
-	if (device->open_count == 1 && device->ops->close_device)
+	if (device->open_count == 1 && device->ops->close_device) {
 		device->ops->close_device(device);
+
+		iommu_driver = device->group->container->iommu_driver;
+		if (device->ops->dma_unmap && iommu_driver &&
+		    iommu_driver->ops->register_notifier)
+			iommu_driver->ops->unregister_notifier(
+				device->group->container->iommu_data,
+				&device->iommu_nb);
+	}
 err_undo_count:
 	device->open_count--;
 	if (device->open_count == 0 && device->kvm)
@@ -1339,12 +1371,20 @@ static const struct file_operations vfio_group_fops = {
 static int vfio_device_fops_release(struct inode *inode, struct file *filep)
 {
 	struct vfio_device *device = filep->private_data;
+	struct vfio_iommu_driver *iommu_driver;
 
 	mutex_lock(&device->dev_set->lock);
 	vfio_assert_device_open(device);
 	down_read(&device->group->group_rwsem);
 	if (device->open_count == 1 && device->ops->close_device)
 		device->ops->close_device(device);
+
+	iommu_driver = device->group->container->iommu_driver;
+	if (device->ops->dma_unmap && iommu_driver &&
+	    iommu_driver->ops->register_notifier)
+		iommu_driver->ops->unregister_notifier(
+			device->group->container->iommu_data,
+			&device->iommu_nb);
 	up_read(&device->group->group_rwsem);
 	device->open_count--;
 	if (device->open_count == 0)
@@ -2027,90 +2067,6 @@ int vfio_dma_rw(struct vfio_device *device, dma_addr_t user_iova, void *data,
 }
 EXPORT_SYMBOL(vfio_dma_rw);
 
-static int vfio_register_iommu_notifier(struct vfio_group *group,
-					unsigned long *events,
-					struct notifier_block *nb)
-{
-	struct vfio_container *container;
-	struct vfio_iommu_driver *driver;
-	int ret;
-
-	lockdep_assert_held_read(&group->group_rwsem);
-
-	container = group->container;
-	driver = container->iommu_driver;
-	if (likely(driver && driver->ops->register_notifier))
-		ret = driver->ops->register_notifier(container->iommu_data,
-						     events, nb);
-	else
-		ret = -ENOTTY;
-
-	return ret;
-}
-
-static int vfio_unregister_iommu_notifier(struct vfio_group *group,
-					  struct notifier_block *nb)
-{
-	struct vfio_container *container;
-	struct vfio_iommu_driver *driver;
-	int ret;
-
-	lockdep_assert_held_read(&group->group_rwsem);
-
-	container = group->container;
-	driver = container->iommu_driver;
-	if (likely(driver && driver->ops->unregister_notifier))
-		ret = driver->ops->unregister_notifier(container->iommu_data,
-						       nb);
-	else
-		ret = -ENOTTY;
-
-	return ret;
-}
-
-int vfio_register_notifier(struct vfio_device *device,
-			   enum vfio_notify_type type, unsigned long *events,
-			   struct notifier_block *nb)
-{
-	struct vfio_group *group = device->group;
-	int ret;
-
-	if (!nb || !events || (*events == 0) ||
-	    !vfio_assert_device_open(device))
-		return -EINVAL;
-
-	switch (type) {
-	case VFIO_IOMMU_NOTIFY:
-		ret = vfio_register_iommu_notifier(group, events, nb);
-		break;
-	default:
-		ret = -EINVAL;
-	}
-	return ret;
-}
-EXPORT_SYMBOL(vfio_register_notifier);
-
-int vfio_unregister_notifier(struct vfio_device *device,
-			     enum vfio_notify_type type,
-			     struct notifier_block *nb)
-{
-	struct vfio_group *group = device->group;
-	int ret;
-
-	if (!nb || !vfio_assert_device_open(device))
-		return -EINVAL;
-
-	switch (type) {
-	case VFIO_IOMMU_NOTIFY:
-		ret = vfio_unregister_iommu_notifier(group, nb);
-		break;
-	default:
-		ret = -EINVAL;
-	}
-	return ret;
-}
-EXPORT_SYMBOL(vfio_unregister_notifier);
-
 /*
 * Module/class support
 */
diff --git a/drivers/vfio/vfio.h b/drivers/vfio/vfio.h
index a6713022115155..cb2e4e9baa8fe8 100644
--- a/drivers/vfio/vfio.h
+++ b/drivers/vfio/vfio.h
@@ -33,6 +33,11 @@ enum vfio_iommu_notify_type {
 	VFIO_IOMMU_CONTAINER_CLOSE = 0,
 };
 
+/* events for register_notifier() */
+enum {
+	VFIO_IOMMU_NOTIFY_DMA_UNMAP = 1,
+};
+
 /**
 * struct vfio_iommu_driver_ops - VFIO IOMMU driver callbacks
 */
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index aa888cc517578e..b76623e3b92fca 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -44,6 +44,7 @@ struct vfio_device {
 	unsigned int open_count;
 	struct completion comp;
 	struct list_head group_next;
+	struct notifier_block iommu_nb;
 };
 
 /**
@@ -60,6 +61,8 @@ struct vfio_device {
 * @match: Optional device name match callback (return: 0 for no-match, >0 for
 *         match, -errno for abort (ex. match with insufficient or incorrect
 *         additional args)
+ * @dma_unmap: Called when userspace unmaps IOVA from the container
+ *             this device is attached to.
 * @device_feature: Optional, fill in the VFIO_DEVICE_FEATURE ioctl
 * @migration_set_state: Optional callback to change the migration state for
 *         devices that support migration. It's mandatory for
@@ -85,6 +88,7 @@ struct vfio_device_ops {
 	int (*mmap)(struct vfio_device *vdev, struct vm_area_struct *vma);
 	void (*request)(struct vfio_device *vdev, unsigned int count);
 	int (*match)(struct vfio_device *vdev, char *buf);
+	void (*dma_unmap)(struct vfio_device *vdev, u64 iova, u64 length);
 	int (*device_feature)(struct vfio_device *device, u32 flags,
 			      void __user *arg, size_t argsz);
 	struct file *(*migration_set_state)(
@@ -154,23 +158,6 @@ extern int vfio_unpin_pages(struct vfio_device *device, unsigned long *user_pfn,
 extern int vfio_dma_rw(struct vfio_device *device, dma_addr_t user_iova,
 		       void *data, size_t len, bool write);
 
-/* each type has independent events */
-enum vfio_notify_type {
-	VFIO_IOMMU_NOTIFY = 0,
-};
-
-/* events for VFIO_IOMMU_NOTIFY */
-#define VFIO_IOMMU_NOTIFY_DMA_UNMAP BIT(0)
-
-extern int vfio_register_notifier(struct vfio_device *device,
-				  enum vfio_notify_type type,
-				  unsigned long *required_events,
-				  struct notifier_block *nb);
-extern int vfio_unregister_notifier(struct vfio_device *device,
-				    enum vfio_notify_type type,
-				    struct notifier_block *nb);
-
 /*
 * Sub-module helpers
 */

From patchwork Tue Jun 7 00:34:36 2022
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 12871212
(Postfix) with ESMTPS id E836ACCA481 for ; Tue, 7 Jun 2022 00:34:43 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id A360310FD63; Tue, 7 Jun 2022 00:34:42 +0000 (UTC) Received: from NAM02-SN1-obe.outbound.protection.outlook.com (mail-sn1anam02on2052.outbound.protection.outlook.com [40.107.96.52]) by gabe.freedesktop.org (Postfix) with ESMTPS id 3C06410FBCF; Tue, 7 Jun 2022 00:34:41 +0000 (UTC) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=htixEmNFa+Jl0FItqT3vyjBkvuUKePzBMxUJuhGYNxLFvT2Rlm/p00ov73D66QbWPdL5l4O9SLmYfpT+WH+DrljGqNUjLeVDQX0oIQnU4Hv2UiQ9z9qNX0597e13bF+zRS8PiP2dHW8ykVNsVP1QChrw4ZuzG2W/A6Wo66rkttHN23ta6yrl7kz1AJmmfIiE722jBts5omKB2l8+1TWo33ccwfUk3NHQaCIbGSi3DKya5VdkOi8uGyw63X+tteLB+6X7AHSNzwzeVI/m8G043ANekNP2TZKp1AJ32KrQP1dv+FS/xxJa/tOm+rVPLFcfabpxVC/4c6ds30LB79cNOw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=rPtpXOYcanPo+YHfXSQeObx0p/KhPykFgGq5MYH9ekQ=; b=Mj+6cABbOmbnLk9ikNrgIlsJ+zXCQkDNH+lvRVc4CAQkInGR6w4xRHd0Czhb7DHyoOxP3p1son+mp+ocVChXcm4UUcCzBPNavQkUl2VrBIe8XL6QzJzlz2JAZOJC2LUuWfTIMbfCHR5RpvKXeUsXlxCimOlfEuPjlx1zPtcdyFPEDLOEsJB0gWwXspUJuMWr1l+px1V2zM8IdoHcfuAg39VvjvRU7ZuX5+BDaDMI3Oug3Q9pFw3aJD2HwhwMMyh39bsqXg8nMCIJYGHRY7k5jnudCcGgzNdNnCxXhkmo5+W8ZHoHfcxThGwnPValMU9dXJWr3aKNbauTdC5xZN7ifA== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass smtp.mailfrom=nvidia.com; dmarc=pass action=none header.from=nvidia.com; dkim=pass header.d=nvidia.com; arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=rPtpXOYcanPo+YHfXSQeObx0p/KhPykFgGq5MYH9ekQ=; 
b=egtcbZN3H8l2sNb931qHRZ3G0w0/GxSfugV7YSFGGHFFGL7uvCm4zvDWTWX1ncvRYnRPJwiIT5knWI622HeDnI35E3zbF9DberyfBkwJ9jVl5J78j8Bau6p8do7Z++dALc3lIt1azjr93slGoTZpuaJy2vKsoc5TIM/oL9UUuZ3yDi5mdZKvsDGKboZd0IlO7sPgjoaIkjyxseTxC71e8dinCEWs63D0hKX4+comCHfqgGpbnsrYPQMW2tpyyu5aF38e9haeOZ6p6od8J2nM2lQSafkqr08CfhGmVFRs8qIiiWg81SRgp+zO7H24gGg88SaMQHAZcDsRkLuKe6dEpg== Authentication-Results: dkim=none (message not signed) header.d=none;dmarc=none action=none header.from=nvidia.com; Received: from MN2PR12MB4192.namprd12.prod.outlook.com (2603:10b6:208:1d5::15) by MN2PR12MB3758.namprd12.prod.outlook.com (2603:10b6:208:169::28) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Tue, 7 Jun 2022 00:34:38 +0000 Received: from MN2PR12MB4192.namprd12.prod.outlook.com ([fe80::2484:51da:d56f:f1a5]) by MN2PR12MB4192.namprd12.prod.outlook.com ([fe80::2484:51da:d56f:f1a5%9]) with mapi id 15.20.5314.019; Tue, 7 Jun 2022 00:34:38 +0000 From: Jason Gunthorpe To: Alexander Gordeev , David Airlie , Tony Krowiak , Alex Williamson , Christian Borntraeger , Cornelia Huck , Daniel Vetter , dri-devel@lists.freedesktop.org, Eric Farman , Harald Freudenberger , Vasily Gorbik , Heiko Carstens , intel-gfx@lists.freedesktop.org, intel-gvt-dev@lists.freedesktop.org, Jani Nikula , Jason Herne , Joonas Lahtinen , kvm@vger.kernel.org, linux-s390@vger.kernel.org, Matthew Rosato , Peter Oberparleiter , Halil Pasic , Rodrigo Vivi , Sven Schnelle , Tvrtko Ursulin , Vineeth Vijayan , Zhenyu Wang , Zhi Wang Date: Mon, 6 Jun 2022 21:34:36 -0300 Message-Id: <2-v1-896844109f36+a-vfio_unmap_notif_jgg@nvidia.com> In-Reply-To: <0-v1-896844109f36+a-vfio_unmap_notif_jgg@nvidia.com> References: X-ClientProxiedBy: MN2PR12CA0009.namprd12.prod.outlook.com (2603:10b6:208:a8::22) To MN2PR12MB4192.namprd12.prod.outlook.com (2603:10b6:208:1d5::15) MIME-Version: 1.0 X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: ffc67675-87a8-4d74-680d-08da481d808e 
Subject: [Intel-gfx] [PATCH 2/2] vfio: Replace the iommu notifier with a device list
Cc: Christoph Hellwig

Instead of bouncing the function call to the driver op through a blocking
notifier, just have the iommu layer call it directly.

Register each device that is being attached to the iommu with the lower
driver, which threads them on a linked list and calls the appropriate
driver op at the right time. Currently the only use is when dma_unmap()
is defined.

Also, fully lock the debugging tests on the pinning path that check a
dma_unmap handler is registered.
Signed-off-by: Jason Gunthorpe
---
 drivers/vfio/vfio.c             | 39 ++++-------------
 drivers/vfio/vfio.h             | 14 ++-----
 drivers/vfio/vfio_iommu_type1.c | 74 ++++++++++++++++++++-------------
 include/linux/vfio.h            |  2 +-
 4 files changed, 58 insertions(+), 71 deletions(-)

diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index f005b644ab9e69..05623f52e38d32 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -1077,17 +1077,6 @@ static void vfio_device_unassign_container(struct vfio_device *device)
 	up_write(&device->group->group_rwsem);
 }
 
-static int vfio_iommu_notifier(struct notifier_block *nb, unsigned long action,
-			       void *data)
-{
-	struct vfio_device *vfio_device =
-		container_of(nb, struct vfio_device, iommu_nb);
-	struct vfio_iommu_type1_dma_unmap *unmap = data;
-
-	vfio_device->ops->dma_unmap(vfio_device, unmap->iova, unmap->size);
-	return NOTIFY_OK;
-}
-
 static struct file *vfio_device_open(struct vfio_device *device)
 {
 	struct vfio_iommu_driver *iommu_driver;
@@ -1123,15 +1112,9 @@ static struct file *vfio_device_open(struct vfio_device *device)
 		}
 
 		iommu_driver = device->group->container->iommu_driver;
-		if (device->ops->dma_unmap && iommu_driver &&
-		    iommu_driver->ops->register_notifier) {
-			unsigned long events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
-
-			device->iommu_nb.notifier_call = vfio_iommu_notifier;
-			iommu_driver->ops->register_notifier(
-				device->group->container->iommu_data, &events,
-				&device->iommu_nb);
-		}
+		if (iommu_driver && iommu_driver->ops->register_device)
+			iommu_driver->ops->register_device(
+				device->group->container->iommu_data, device);
 
 		up_read(&device->group->group_rwsem);
 	}
@@ -1171,11 +1154,9 @@ static struct file *vfio_device_open(struct vfio_device *device)
 		device->ops->close_device(device);
 
 		iommu_driver = device->group->container->iommu_driver;
-		if (device->ops->dma_unmap && iommu_driver &&
-		    iommu_driver->ops->register_notifier)
-			iommu_driver->ops->unregister_notifier(
-				device->group->container->iommu_data,
-				&device->iommu_nb);
+		if (iommu_driver && iommu_driver->ops->register_device)
+			iommu_driver->ops->unregister_device(
+				device->group->container->iommu_data, device);
 	}
 err_undo_count:
 	device->open_count--;
@@ -1380,11 +1361,9 @@ static int vfio_device_fops_release(struct inode *inode, struct file *filep)
 		device->ops->close_device(device);
 
 	iommu_driver = device->group->container->iommu_driver;
-	if (device->ops->dma_unmap && iommu_driver &&
-	    iommu_driver->ops->register_notifier)
-		iommu_driver->ops->unregister_notifier(
-			device->group->container->iommu_data,
-			&device->iommu_nb);
+	if (iommu_driver && iommu_driver->ops->unregister_device)
+		iommu_driver->ops->unregister_device(
+			device->group->container->iommu_data, device);
 	up_read(&device->group->group_rwsem);
 	device->open_count--;
 	if (device->open_count == 0)
diff --git a/drivers/vfio/vfio.h b/drivers/vfio/vfio.h
index cb2e4e9baa8fe8..4a7db1f3c33e7e 100644
--- a/drivers/vfio/vfio.h
+++ b/drivers/vfio/vfio.h
@@ -33,11 +33,6 @@ enum vfio_iommu_notify_type {
 	VFIO_IOMMU_CONTAINER_CLOSE = 0,
 };
 
-/* events for register_notifier() */
-enum {
-	VFIO_IOMMU_NOTIFY_DMA_UNMAP = 1,
-};
-
 /**
  * struct vfio_iommu_driver_ops - VFIO IOMMU driver callbacks
  */
@@ -60,11 +55,10 @@ struct vfio_iommu_driver_ops {
 				  unsigned long *phys_pfn);
 	int	(*unpin_pages)(void *iommu_data,
 			       unsigned long *user_pfn, int npage);
-	int	(*register_notifier)(void *iommu_data,
-				     unsigned long *events,
-				     struct notifier_block *nb);
-	int	(*unregister_notifier)(void *iommu_data,
-				       struct notifier_block *nb);
+	void	(*register_device)(void *iommu_data,
+				   struct vfio_device *vdev);
+	void	(*unregister_device)(void *iommu_data,
+				     struct vfio_device *vdev);
 	int	(*dma_rw)(void *iommu_data, dma_addr_t user_iova,
 			  void *data, size_t count, bool write);
 	struct iommu_domain *(*group_iommu_domain)(void *iommu_data,
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index c13b9290e35759..7011fdeaf7db08 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -67,7 +67,8 @@ struct vfio_iommu {
 	struct list_head	iova_list;
 	struct mutex		lock;
 	struct rb_root		dma_list;
-	struct blocking_notifier_head notifier;
+	struct list_head	device_list;
+	struct mutex		device_list_lock;
 	unsigned int		dma_avail;
 	unsigned int		vaddr_invalid_count;
 	uint64_t		pgsize_bitmap;
@@ -865,8 +866,8 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 		}
 	}
 
-	/* Fail if notifier list is empty */
-	if (!iommu->notifier.head) {
+	/* Fail if no dma_unmap notifier is registered */
+	if (list_empty(&iommu->device_list)) {
 		ret = -EINVAL;
 		goto pin_done;
 	}
@@ -1406,7 +1407,7 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 		}
 
 		if (!RB_EMPTY_ROOT(&dma->pfn_list)) {
-			struct vfio_iommu_type1_dma_unmap nb_unmap;
+			struct vfio_device *device;
 
 			if (dma_last == dma) {
 				BUG_ON(++retries > 10);
@@ -1415,20 +1416,25 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 				retries = 0;
 			}
 
-			nb_unmap.iova = dma->iova;
-			nb_unmap.size = dma->size;
-
 			/*
 			 * Notify anyone (mdev vendor drivers) to invalidate and
 			 * unmap iovas within the range we're about to unmap.
 			 * Vendor drivers MUST unpin pages in response to an
 			 * invalidation.
 			 */
-			mutex_unlock(&iommu->lock);
-			blocking_notifier_call_chain(&iommu->notifier,
-						     VFIO_IOMMU_NOTIFY_DMA_UNMAP,
-						     &nb_unmap);
-			mutex_lock(&iommu->lock);
+			if (!list_empty(&iommu->device_list)) {
+				mutex_lock(&iommu->device_list_lock);
+				mutex_unlock(&iommu->lock);
+
+				list_for_each_entry(device,
+						    &iommu->device_list,
+						    iommu_entry)
+					device->ops->dma_unmap(
+						device, dma->iova, dma->size);
+
+				mutex_unlock(&iommu->device_list_lock);
+				mutex_lock(&iommu->lock);
+			}
 			goto again;
 		}
@@ -2478,7 +2484,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 		if (list_empty(&iommu->emulated_iommu_groups) &&
 		    list_empty(&iommu->domain_list)) {
-			WARN_ON(iommu->notifier.head);
+			WARN_ON(!list_empty(&iommu->device_list));
 			vfio_iommu_unmap_unpin_all(iommu);
 		}
 		goto detach_group_done;
@@ -2510,7 +2516,8 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 	if (list_empty(&domain->group_list)) {
 		if (list_is_singular(&iommu->domain_list)) {
 			if (list_empty(&iommu->emulated_iommu_groups)) {
-				WARN_ON(iommu->notifier.head);
+				WARN_ON(!list_empty(
+						&iommu->device_list));
 				vfio_iommu_unmap_unpin_all(iommu);
 			} else {
 				vfio_iommu_unmap_unpin_reaccount(iommu);
@@ -2571,7 +2578,8 @@ static void *vfio_iommu_type1_open(unsigned long arg)
 	iommu->dma_avail = dma_entry_limit;
 	iommu->container_open = true;
 	mutex_init(&iommu->lock);
-	BLOCKING_INIT_NOTIFIER_HEAD(&iommu->notifier);
+	mutex_init(&iommu->device_list_lock);
+	INIT_LIST_HEAD(&iommu->device_list);
 	init_waitqueue_head(&iommu->vaddr_wait);
 	iommu->pgsize_bitmap = PAGE_MASK;
 	INIT_LIST_HEAD(&iommu->emulated_iommu_groups);
@@ -3008,28 +3016,34 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 	}
 }
 
-static int vfio_iommu_type1_register_notifier(void *iommu_data,
-					      unsigned long *events,
-					      struct notifier_block *nb)
+static void vfio_iommu_type1_register_device(void *iommu_data,
+					     struct vfio_device *vdev)
 {
 	struct vfio_iommu *iommu = iommu_data;
 
-	/* clear known events */
-	*events &= ~VFIO_IOMMU_NOTIFY_DMA_UNMAP;
-
-	/* refuse to register if still events remaining */
-	if (*events)
-		return -EINVAL;
+	if (!vdev->ops->dma_unmap)
+		return;
 
-	return blocking_notifier_chain_register(&iommu->notifier, nb);
+	mutex_lock(&iommu->lock);
+	mutex_lock(&iommu->device_list_lock);
+	list_add(&vdev->iommu_entry, &iommu->device_list);
+	mutex_unlock(&iommu->device_list_lock);
+	mutex_unlock(&iommu->lock);
 }
 
-static int vfio_iommu_type1_unregister_notifier(void *iommu_data,
-						struct notifier_block *nb)
+static void vfio_iommu_type1_unregister_device(void *iommu_data,
+					       struct vfio_device *vdev)
 {
 	struct vfio_iommu *iommu = iommu_data;
 
-	return blocking_notifier_chain_unregister(&iommu->notifier, nb);
+	if (!vdev->ops->dma_unmap)
+		return;
+
+	mutex_lock(&iommu->lock);
+	mutex_lock(&iommu->device_list_lock);
+	list_del(&vdev->iommu_entry);
+	mutex_unlock(&iommu->device_list_lock);
+	mutex_unlock(&iommu->lock);
 }
 
 static int vfio_iommu_type1_dma_rw_chunk(struct vfio_iommu *iommu,
@@ -3163,8 +3177,8 @@ static const struct vfio_iommu_driver_ops vfio_iommu_driver_ops_type1 = {
 	.detach_group		= vfio_iommu_type1_detach_group,
 	.pin_pages		= vfio_iommu_type1_pin_pages,
 	.unpin_pages		= vfio_iommu_type1_unpin_pages,
-	.register_notifier	= vfio_iommu_type1_register_notifier,
-	.unregister_notifier	= vfio_iommu_type1_unregister_notifier,
+	.register_device	= vfio_iommu_type1_register_device,
+	.unregister_device	= vfio_iommu_type1_unregister_device,
	.dma_rw			= vfio_iommu_type1_dma_rw,
 	.group_iommu_domain	= vfio_iommu_type1_group_iommu_domain,
 	.notify			= vfio_iommu_type1_notify,
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index b76623e3b92fca..c22d3f1e13b66c 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -44,7 +44,7 @@ struct vfio_device {
 	unsigned int open_count;
 	struct completion comp;
 	struct list_head group_next;
-	struct notifier_block iommu_nb;
+	struct list_head iommu_entry;
 };
 
 /**