From patchwork Sun Aug 20 21:53:08 2023
X-Patchwork-Submitter: Danilo Krummrich
X-Patchwork-Id: 13358902
From: Danilo Krummrich
To: airlied@gmail.com, daniel@ffwll.ch, matthew.brost@intel.com,
    thomas.hellstrom@linux.intel.com, sarah.walker@imgtec.com,
    donald.robson@imgtec.com, boris.brezillon@collabora.com,
    christian.koenig@amd.com, faith.ekstrand@collabora.com,
    bskeggs@redhat.com, Liam.Howlett@oracle.com
Cc: nouveau@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    dri-devel@lists.freedesktop.org
Subject: [PATCH drm-misc-next 1/3] drm: drm_exec: build always builtin
Date: Sun, 20 Aug 2023 23:53:08 +0200
Message-ID: <20230820215320.4187-2-dakr@redhat.com>
In-Reply-To: <20230820215320.4187-1-dakr@redhat.com>
References: <20230820215320.4187-1-dakr@redhat.com>

drm_exec must always be builtin for the DRM GPUVA manager to depend on
it.

Signed-off-by: Danilo Krummrich
---
 drivers/gpu/drm/Kconfig         | 6 ------
 drivers/gpu/drm/Makefile        | 3 +--
 drivers/gpu/drm/nouveau/Kconfig | 1 -
 3 files changed, 1 insertion(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index ab9ef1c20349..85122d4bb1e7 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -210,12 +210,6 @@ config DRM_TTM_KUNIT_TEST
 
 	  If in doubt, say "N".
 
-config DRM_EXEC
-	tristate
-	depends on DRM
-	help
-	  Execution context for command submissions
-
 config DRM_BUDDY
 	tristate
 	depends on DRM

diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index 215e78e79125..388e0964a875 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -23,6 +23,7 @@ drm-y := \
 	drm_dumb_buffers.o \
 	drm_edid.o \
 	drm_encoder.o \
+	drm_exec.o \
 	drm_file.o \
 	drm_fourcc.o \
 	drm_framebuffer.o \
@@ -80,8 +81,6 @@ obj-$(CONFIG_DRM_PANEL_ORIENTATION_QUIRKS) += drm_panel_orientation_quirks.o
 #
 # Memory-management helpers
 #
-obj-$(CONFIG_DRM_EXEC) += drm_exec.o
-
 obj-$(CONFIG_DRM_BUDDY) += drm_buddy.o
 
 drm_dma_helper-y := drm_gem_dma_helper.o

diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
index c52e8096cca4..2dddedac125b 100644
--- a/drivers/gpu/drm/nouveau/Kconfig
+++ b/drivers/gpu/drm/nouveau/Kconfig
@@ -10,7 +10,6 @@ config DRM_NOUVEAU
 	select DRM_KMS_HELPER
 	select DRM_TTM
 	select DRM_TTM_HELPER
-	select DRM_EXEC
 	select DRM_SCHED
 	select I2C
 	select I2C_ALGOBIT

From patchwork Sun Aug 20 21:53:09 2023
X-Patchwork-Submitter: Danilo Krummrich
X-Patchwork-Id: 13358903
From: Danilo Krummrich
To: airlied@gmail.com, daniel@ffwll.ch, matthew.brost@intel.com,
    thomas.hellstrom@linux.intel.com, sarah.walker@imgtec.com,
    donald.robson@imgtec.com, boris.brezillon@collabora.com,
    christian.koenig@amd.com, faith.ekstrand@collabora.com,
    bskeggs@redhat.com, Liam.Howlett@oracle.com
Cc: nouveau@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    dri-devel@lists.freedesktop.org
Subject: [PATCH drm-misc-next 2/3] drm/gpuva_mgr: generalize dma_resv/extobj handling and GEM validation
Date: Sun, 20 Aug 2023 23:53:09 +0200
Message-ID: <20230820215320.4187-3-dakr@redhat.com>
In-Reply-To: <20230820215320.4187-1-dakr@redhat.com>
References: <20230820215320.4187-1-dakr@redhat.com>

So far the DRM GPUVA manager offers common infrastructure to track GPU VA
allocations and mappings, generically connect GPU VA mappings to their
backing buffers and perform more complex mapping operations on the GPU VA
space.
However, there are more design patterns commonly used by drivers, which
can potentially be generalized in order to make the DRM GPUVA manager
represent a basic GPU-VM implementation. In this context, this patch aims
at generalizing the following elements.

1) Provide a common dma-resv for GEM objects not being used outside of
   this GPU-VM.

2) Provide tracking of external GEM objects (GEM objects which are
   shared with other GPU-VMs).

3) Provide functions to efficiently lock all GEM objects dma-resv the
   GPU-VM contains mappings of.

4) Provide tracking of evicted GEM objects the GPU-VM contains mappings
   of, such that validation of evicted GEM objects is accelerated.

5) Provide some convenience functions for common patterns.

Rather than being designed as a "framework", the target is to make all
features appear as a collection of optional helper functions, such that
drivers are free to make use of the DRM GPUVA manager's basic
functionality and opt-in for other features without setting any feature
flags, just by making use of the corresponding functions.
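The external-object tracking in point 2) stores the reference count directly
in the pointer-sized value slot of a maple tree keyed by the GEM object's
address, reinterpreting the slot through a union. The scheme can be modeled in
plain userspace C; the following is a minimal sketch only, with a fixed-size
array standing in for the maple tree and no locking — all names here are
illustrative and not part of the patch:

```c
#include <stdint.h>
#include <stddef.h>

/* Toy stand-in for the maple tree: maps an object address (the key) to a
 * pointer-sized slot that is reinterpreted as a reference count. */
#define MAX_EXTOBJ 16

struct extobj_map {
	uintptr_t key[MAX_EXTOBJ]; /* object address, 0 == empty slot */
	uintptr_t cnt[MAX_EXTOBJ]; /* reference count stored in the slot */
};

/* Insert @obj or, if it is already tracked, bump its reference count --
 * mirroring the walk-then-store logic of __drm_gpuva_extobj_insert(). */
int extobj_insert(struct extobj_map *m, void *obj)
{
	uintptr_t key = (uintptr_t)obj;
	size_t i;

	for (i = 0; i < MAX_EXTOBJ; i++) {
		if (m->key[i] == key) {
			m->cnt[i]++;
			return 0;
		}
	}
	for (i = 0; i < MAX_EXTOBJ; i++) {
		if (!m->key[i]) {
			m->key[i] = key;
			m->cnt[i] = 1;
			return 0;
		}
	}
	return -1; /* map full; the real code allocates a tree node instead */
}

/* Drop one reference; erase the entry once the count hits zero, mirroring
 * __drm_gpuva_extobj_remove(). Returns the remaining count, -1 if absent. */
long extobj_remove(struct extobj_map *m, void *obj)
{
	uintptr_t key = (uintptr_t)obj;
	size_t i;

	for (i = 0; i < MAX_EXTOBJ; i++) {
		if (m->key[i] == key) {
			if (!--m->cnt[i]) {
				m->key[i] = 0;
				return 0;
			}
			return (long)m->cnt[i];
		}
	}
	return -1;
}
```

The point of the trick in the real code is that no separate refcount
allocation is needed per external object: the maple-tree slot that would hold
a pointer holds the counter itself, and the GEM object reference is taken only
on the 0 -> 1 transition and dropped on the 1 -> 0 transition.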
Signed-off-by: Danilo Krummrich
---
 drivers/gpu/drm/drm_gpuva_mgr.c | 688 +++++++++++++++++++++++++++++++-
 include/drm/drm_gem.h           |  48 ++-
 include/drm/drm_gpuva_mgr.h     | 302 +++++++++++++-
 3 files changed, 1010 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuva_mgr.c b/drivers/gpu/drm/drm_gpuva_mgr.c
index f86bfad74ff8..69872b205961 100644
--- a/drivers/gpu/drm/drm_gpuva_mgr.c
+++ b/drivers/gpu/drm/drm_gpuva_mgr.c
@@ -655,6 +655,7 @@ drm_gpuva_range_valid(struct drm_gpuva_manager *mgr,
 /**
  * drm_gpuva_manager_init() - initialize a &drm_gpuva_manager
  * @mgr: pointer to the &drm_gpuva_manager to initialize
+ * @drm: the driver's &drm_device
  * @name: the name of the GPU VA space
  * @start_offset: the start offset of the GPU VA space
  * @range: the size of the GPU VA space
@@ -669,6 +670,7 @@ drm_gpuva_range_valid(struct drm_gpuva_manager *mgr,
  */
 void
 drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
+		       struct drm_device *drm,
 		       const char *name,
 		       u64 start_offset, u64 range,
 		       u64 reserve_offset, u64 reserve_range,
@@ -677,6 +679,11 @@ drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
 	mgr->rb.tree = RB_ROOT_CACHED;
 	INIT_LIST_HEAD(&mgr->rb.list);
 
+	mt_init(&mgr->mt_ext);
+
+	INIT_LIST_HEAD(&mgr->evict.list);
+	spin_lock_init(&mgr->evict.lock);
+
 	drm_gpuva_check_overflow(start_offset, range);
 	mgr->mm_start = start_offset;
 	mgr->mm_range = range;
@@ -694,6 +701,9 @@ drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
 				     reserve_range)))
 			__drm_gpuva_insert(mgr, &mgr->kernel_alloc_node);
 	}
+
+	drm_gem_private_object_init(drm, &mgr->d_obj, 0);
+	mgr->resv = mgr->d_obj.resv;
 }
 EXPORT_SYMBOL_GPL(drm_gpuva_manager_init);
 
@@ -713,10 +723,575 @@ drm_gpuva_manager_destroy(struct drm_gpuva_manager *mgr)
 		__drm_gpuva_remove(&mgr->kernel_alloc_node);
 
 	WARN(!RB_EMPTY_ROOT(&mgr->rb.tree.rb_root),
-	     "GPUVA tree is not empty, potentially leaking memory.");
+	     "GPUVA tree is not empty, potentially leaking memory.\n");
+
+	mtree_destroy(&mgr->mt_ext);
+
+	WARN(!list_empty(&mgr->evict.list), "Evict list should be empty.\n");
+
+	drm_gem_private_object_fini(&mgr->d_obj);
 }
 EXPORT_SYMBOL_GPL(drm_gpuva_manager_destroy);
 
+/**
+ * drm_gpuva_manager_prepare_objects() - prepare all associated BOs
+ * @mgr: the &drm_gpuva_manager
+ * @num_fences: the amount of &dma_fences to reserve
+ *
+ * Calls drm_exec_prepare_obj() for all &drm_gem_objects the given
+ * &drm_gpuva_manager contains mappings of.
+ *
+ * Drivers can obtain the corresponding &drm_exec instance through
+ * DRM_GPUVA_EXEC(). It is the driver's responsibility to call drm_exec_init()
+ * and drm_exec_fini() accordingly.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
+int
+drm_gpuva_manager_prepare_objects(struct drm_gpuva_manager *mgr,
+				  unsigned int num_fences)
+{
+	struct drm_exec *exec = DRM_GPUVA_EXEC(mgr);
+	MA_STATE(mas, &mgr->mt_ext, 0, 0);
+	union {
+		void *ptr;
+		uintptr_t cnt;
+	} ref;
+	int ret;
+
+	ret = drm_exec_prepare_obj(exec, &mgr->d_obj, num_fences);
+	if (ret)
+		goto out;
+
+	rcu_read_lock();
+	mas_for_each(&mas, ref.ptr, ULONG_MAX) {
+		struct drm_gem_object *obj;
+
+		mas_pause(&mas);
+		rcu_read_unlock();
+
+		obj = (struct drm_gem_object *)(uintptr_t)mas.index;
+		ret = drm_exec_prepare_obj(exec, obj, num_fences);
+		if (ret)
+			goto out;
+
+		rcu_read_lock();
+	}
+	rcu_read_unlock();
+
+out:
+	return ret;
+}
+EXPORT_SYMBOL_GPL(drm_gpuva_manager_prepare_objects);
+
+/**
+ * drm_gpuva_manager_lock_extra() - lock all dma-resv of all associated BOs
+ * @mgr: the &drm_gpuva_manager
+ * @fn: callback received by the driver to lock additional dma-resv
+ * @priv: private driver data passed to @fn
+ * @num_fences: the amount of &dma_fences to reserve
+ * @interruptible: sleep interruptible if waiting
+ *
+ * Acquires all dma-resv locks of all &drm_gem_objects the given
+ * &drm_gpuva_manager contains mappings of.
+ *
+ * Additionally, when calling this function the driver receives the given @fn
+ * callback to lock additional dma-resv in the context of the
+ * &drm_gpuva_manager's &drm_exec instance. Typically, drivers would call
+ * drm_exec_prepare_obj() from within this callback.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
+int
+drm_gpuva_manager_lock_extra(struct drm_gpuva_manager *mgr,
+			     int (*fn)(struct drm_gpuva_manager *mgr,
+				       void *priv, unsigned int num_fences),
+			     void *priv,
+			     unsigned int num_fences,
+			     bool interruptible)
+{
+	struct drm_exec *exec = DRM_GPUVA_EXEC(mgr);
+	uint32_t flags;
+	int ret;
+
+	flags = DRM_EXEC_IGNORE_DUPLICATES |
+		(interruptible ? DRM_EXEC_INTERRUPTIBLE_WAIT : 0);
+
+	drm_exec_init(exec, flags);
+
+	drm_exec_until_all_locked(exec) {
+		ret = drm_gpuva_manager_prepare_objects(mgr, num_fences);
+		drm_exec_retry_on_contention(exec);
+		if (ret)
+			goto err;
+
+		if (fn) {
+			ret = fn(mgr, priv, num_fences);
+			drm_exec_retry_on_contention(exec);
+			if (ret)
+				goto err;
+		}
+	}
+
+	return 0;
+
+err:
+	drm_exec_fini(exec);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(drm_gpuva_manager_lock_extra);
+
+static int
+fn_lock_array(struct drm_gpuva_manager *mgr, void *priv,
+	      unsigned int num_fences)
+{
+	struct {
+		struct drm_gem_object **objs;
+		unsigned int num_objs;
+	} *args = priv;
+
+	return drm_exec_prepare_array(DRM_GPUVA_EXEC(mgr), args->objs,
+				      args->num_objs, num_fences);
+}
+
+/**
+ * drm_gpuva_manager_lock_array() - lock all dma-resv of all associated BOs
+ * @mgr: the &drm_gpuva_manager
+ * @objs: additional &drm_gem_objects to lock
+ * @num_objs: the number of additional &drm_gem_objects to lock
+ * @num_fences: the amount of &dma_fences to reserve
+ * @interruptible: sleep interruptible if waiting
+ *
+ * Acquires all dma-resv locks of all &drm_gem_objects the given
+ * &drm_gpuva_manager contains mappings of, plus the ones given through @objs.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
+int
+drm_gpuva_manager_lock_array(struct drm_gpuva_manager *mgr,
+			     struct drm_gem_object **objs,
+			     unsigned int num_objs,
+			     unsigned int num_fences,
+			     bool interruptible)
+{
+	struct {
+		struct drm_gem_object **objs;
+		unsigned int num_objs;
+	} args;
+
+	args.objs = objs;
+	args.num_objs = num_objs;
+
+	return drm_gpuva_manager_lock_extra(mgr, fn_lock_array, &args,
+					    num_fences, interruptible);
+}
+EXPORT_SYMBOL_GPL(drm_gpuva_manager_lock_array);
+
+/**
+ * drm_gpuva_manager_validate() - validate all BOs marked as evicted
+ * @mgr: the &drm_gpuva_manager to validate evicted BOs
+ *
+ * Calls the &drm_gpuva_fn_ops.bo_validate callback for all evicted buffer
+ * objects being mapped in the given &drm_gpuva_manager.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
+int
+drm_gpuva_manager_validate(struct drm_gpuva_manager *mgr)
+{
+	const struct drm_gpuva_fn_ops *ops = mgr->ops;
+	struct drm_gpuva_gem *vm_bo;
+	int ret;
+
+	if (unlikely(!ops || !ops->bo_validate))
+		return -ENOTSUPP;
+
+	/* At this point we should hold all dma-resv locks of all GEM objects
+	 * associated with this GPU-VM, hence it is safe to walk the list.
+	 */
+	list_for_each_entry(vm_bo, &mgr->evict.list, list.entry.evict) {
+		dma_resv_assert_held(vm_bo->obj->resv);
+
+		ret = ops->bo_validate(vm_bo->obj);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(drm_gpuva_manager_validate);
+
+/**
+ * drm_gpuva_manager_resv_add_fence - add fence to private and all extobj
+ * dma-resv
+ * @mgr: the &drm_gpuva_manager to add a fence to
+ * @fence: fence to add
+ * @private_usage: private dma-resv usage
+ * @extobj_usage: extobj dma-resv usage
+ */
+void
+drm_gpuva_manager_resv_add_fence(struct drm_gpuva_manager *mgr,
+				 struct dma_fence *fence,
+				 enum dma_resv_usage private_usage,
+				 enum dma_resv_usage extobj_usage)
+{
+	struct drm_exec *exec = DRM_GPUVA_EXEC(mgr);
+	struct drm_gem_object *obj;
+	unsigned long index;
+
+	drm_exec_for_each_locked_object(exec, index, obj) {
+		dma_resv_assert_held(obj->resv);
+		dma_resv_add_fence(obj->resv, fence,
+				   drm_gpuva_is_extobj(mgr, obj) ?
+				   extobj_usage : private_usage);
+	}
+}
+EXPORT_SYMBOL_GPL(drm_gpuva_manager_resv_add_fence);
+
+static struct drm_gpuva_gem *
+__drm_gpuva_gem_find(struct drm_gpuva_manager *mgr,
+		     struct drm_gem_object *obj)
+{
+	struct drm_gpuva_gem *vm_bo;
+
+	drm_gem_gpuva_assert_lock_held(obj);
+
+	drm_gem_for_each_gpuva_gem(vm_bo, obj)
+		if (vm_bo->mgr == mgr)
+			return vm_bo;
+
+	return NULL;
+}
+
+/**
+ * drm_gpuva_gem_create() - create a new instance of struct drm_gpuva_gem
+ * @mgr: The &drm_gpuva_manager the @obj is mapped in.
+ * @obj: The &drm_gem_object being mapped in the @mgr.
+ *
+ * If provided by the driver, this function uses the &drm_gpuva_fn_ops
+ * vm_bo_alloc() callback to allocate.
+ *
+ * Returns: a pointer to the &drm_gpuva_gem on success, NULL on failure
+ */
+struct drm_gpuva_gem *
+drm_gpuva_gem_create(struct drm_gpuva_manager *mgr,
+		     struct drm_gem_object *obj)
+{
+	const struct drm_gpuva_fn_ops *ops = mgr->ops;
+	struct drm_gpuva_gem *vm_bo;
+
+	if (ops && ops->vm_bo_alloc)
+		vm_bo = ops->vm_bo_alloc();
+	else
+		vm_bo = kzalloc(sizeof(*vm_bo), GFP_KERNEL);
+
+	if (unlikely(!vm_bo))
+		return NULL;
+
+	vm_bo->mgr = mgr;
+	vm_bo->obj = obj;
+
+	kref_init(&vm_bo->kref);
+	INIT_LIST_HEAD(&vm_bo->list.gpuva);
+	INIT_LIST_HEAD(&vm_bo->list.entry.gem);
+	INIT_LIST_HEAD(&vm_bo->list.entry.evict);
+
+	drm_gem_object_get(obj);
+
+	return vm_bo;
+}
+EXPORT_SYMBOL_GPL(drm_gpuva_gem_create);
+
+void
+drm_gpuva_gem_destroy(struct kref *kref)
+{
+	struct drm_gpuva_gem *vm_bo = container_of(kref, struct drm_gpuva_gem,
+						   kref);
+	const struct drm_gpuva_fn_ops *ops = vm_bo->mgr->ops;
+
+	drm_gem_object_put(vm_bo->obj);
+
+	if (ops && ops->vm_bo_free)
+		ops->vm_bo_free(vm_bo);
+	else
+		kfree(vm_bo);
+}
+EXPORT_SYMBOL_GPL(drm_gpuva_gem_destroy);
+
+/**
+ * drm_gpuva_gem_find() - find the &drm_gpuva_gem for the given
+ * &drm_gpuva_manager and &drm_gem_object
+ * @mgr: The &drm_gpuva_manager the @obj is mapped in.
+ * @obj: The &drm_gem_object being mapped in the @mgr.
+ *
+ * Find the &drm_gpuva_gem representing the combination of the given
+ * &drm_gpuva_manager and &drm_gem_object. If found, increases the reference
+ * count of the &drm_gpuva_gem accordingly.
+ *
+ * Returns: a pointer to the &drm_gpuva_gem on success, NULL on failure
+ */
+struct drm_gpuva_gem *
+drm_gpuva_gem_find(struct drm_gpuva_manager *mgr,
+		   struct drm_gem_object *obj)
+{
+	struct drm_gpuva_gem *vm_bo = __drm_gpuva_gem_find(mgr, obj);
+
+	return vm_bo ? drm_gpuva_gem_get(vm_bo) : NULL;
+}
+EXPORT_SYMBOL_GPL(drm_gpuva_gem_find);
+
+/**
+ * drm_gpuva_gem_obtain() - obtains an instance of the &drm_gpuva_gem for the
+ * given &drm_gpuva_manager and &drm_gem_object
+ * @mgr: The &drm_gpuva_manager the @obj is mapped in.
+ * @obj: The &drm_gem_object being mapped in the @mgr.
+ *
+ * Find the &drm_gpuva_gem representing the combination of the given
+ * &drm_gpuva_manager and &drm_gem_object. If found, increases the reference
+ * count of the &drm_gpuva_gem accordingly. If not found, allocates a new
+ * &drm_gpuva_gem.
+ *
+ * Returns: a pointer to the &drm_gpuva_gem on success, an ERR_PTR on failure
+ */
+struct drm_gpuva_gem *
+drm_gpuva_gem_obtain(struct drm_gpuva_manager *mgr,
+		     struct drm_gem_object *obj)
+{
+	struct drm_gpuva_gem *vm_bo;
+
+	vm_bo = drm_gpuva_gem_find(mgr, obj);
+	if (vm_bo)
+		return vm_bo;
+
+	vm_bo = drm_gpuva_gem_create(mgr, obj);
+	if (!vm_bo)
+		return ERR_PTR(-ENOMEM);
+
+	return vm_bo;
+}
+EXPORT_SYMBOL_GPL(drm_gpuva_gem_obtain);
+
+/**
+ * drm_gpuva_gem_obtain_prealloc() - obtains an instance of the &drm_gpuva_gem
+ * for the given &drm_gpuva_manager and &drm_gem_object
+ * @mgr: The &drm_gpuva_manager the @obj is mapped in.
+ * @obj: The &drm_gem_object being mapped in the @mgr.
+ *
+ * Find the &drm_gpuva_gem representing the combination of the given
+ * &drm_gpuva_manager and &drm_gem_object. If found, increases the reference
+ * count of the found &drm_gpuva_gem accordingly, while the @__vm_bo reference
+ * count is decreased. If not found @__vm_bo is returned.
+ *
+ * Returns: a pointer to the found &drm_gpuva_gem or @__vm_bo if no existing
+ * &drm_gpuva_gem was found
+ */
+struct drm_gpuva_gem *
+drm_gpuva_gem_obtain_prealloc(struct drm_gpuva_manager *mgr,
+			      struct drm_gem_object *obj,
+			      struct drm_gpuva_gem *__vm_bo)
+{
+	struct drm_gpuva_gem *vm_bo;
+
+	vm_bo = drm_gpuva_gem_find(mgr, obj);
+	if (vm_bo) {
+		drm_gpuva_gem_put(__vm_bo);
+		return vm_bo;
+	}
+
+	return __vm_bo;
+}
+EXPORT_SYMBOL_GPL(drm_gpuva_gem_obtain_prealloc);
+
+static int
+__drm_gpuva_extobj_insert(struct drm_gpuva_manager *mgr,
+			  struct drm_gem_object *obj,
+			  gfp_t gfp)
+{
+	MA_STATE(mas, &mgr->mt_ext, 0, 0);
+	union {
+		struct drm_gem_object *obj;
+		uintptr_t index;
+	} gem;
+	union {
+		void *ptr;
+		uintptr_t cnt;
+	} ref;
+	int ret = 0;
+
+	gem.obj = obj;
+	mas_set(&mas, gem.index);
+
+	mas_lock(&mas);
+	ref.ptr = mas_walk(&mas);
+	if (ref.ptr) {
+		++ref.cnt;
+		mas_store(&mas, ref.ptr);
+	} else {
+		if (unlikely(!gfp)) {
+			ret = -EINVAL;
+			goto out;
+		}
+
+		mas_set(&mas, gem.index);
+		ref.cnt = 1;
+		ret = mas_store_gfp(&mas, ref.ptr, gfp);
+		if (likely(!ret))
+			drm_gem_object_get(obj);
+	}
+out:
+	mas_unlock(&mas);
+	return ret;
+}
+
+static void
+__drm_gpuva_extobj_remove(struct drm_gpuva_manager *mgr,
+			  struct drm_gem_object *obj)
+{
+	MA_STATE(mas, &mgr->mt_ext, 0, 0);
+	union {
+		struct drm_gem_object *obj;
+		uintptr_t index;
+	} gem;
+	union {
+		void *ptr;
+		uintptr_t cnt;
+	} ref;
+
+	gem.obj = obj;
+	mas_set(&mas, gem.index);
+
+	mas_lock(&mas);
+	if (unlikely(!(ref.ptr = mas_walk(&mas))))
+		goto out;
+
+	if (!--ref.cnt) {
+		mas_erase(&mas);
+		drm_gem_object_put(obj);
+	} else {
+		mas_store(&mas, ref.ptr);
+	}
+out:
+	mas_unlock(&mas);
+}
+
+/**
+ * drm_gpuva_extobj_insert - insert an external &drm_gem_object
+ * @mgr: the &drm_gpuva_manager to insert into
+ * @obj: the &drm_gem_object to insert as extobj
+ *
+ * Insert a &drm_gem_object into the &drm_gpuva_manager's external object tree.
+ * If the &drm_gem_object already exists in the tree, the reference counter
+ * of this external object is increased by one.
+ *
+ * Drivers should insert the external &drm_gem_object before the dma-fence
+ * signalling critical section, e.g. when submitting the job, and before
+ * locking all &drm_gem_objects of a GPU-VM, e.g. with drm_gpuva_manager_lock()
+ * or its derivatives.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
+int
+drm_gpuva_extobj_insert(struct drm_gpuva_manager *mgr,
+			struct drm_gem_object *obj)
+{
+	return drm_gpuva_is_extobj(mgr, obj) ?
+		__drm_gpuva_extobj_insert(mgr, obj, GFP_KERNEL) : 0;
+}
+EXPORT_SYMBOL_GPL(drm_gpuva_extobj_insert);
+
+/**
+ * drm_gpuva_extobj_get - increase the reference count of an external
+ * &drm_gem_object
+ * @mgr: the &drm_gpuva_manager storing the extobj
+ * @obj: the &drm_gem_object representing the extobj
+ *
+ * Increases the reference count of the extobj represented by @obj.
+ *
+ * Drivers should call this for every &drm_gpuva backed by a &drm_gem_object
+ * being inserted.
+ *
+ * For &drm_gpuva_op_remap operations drivers should make sure to only take an
+ * additional reference if the re-map operation splits an existing &drm_gpuva
+ * into two separate ones.
+ *
+ * See also drm_gpuva_map_get() and drm_gpuva_remap_get().
+ */
+void
+drm_gpuva_extobj_get(struct drm_gpuva_manager *mgr,
+		     struct drm_gem_object *obj)
+{
+	if (drm_gpuva_is_extobj(mgr, obj))
+		WARN(__drm_gpuva_extobj_insert(mgr, obj, 0),
+		     "Can't increase ref-count of non-existent extobj.\n");
+}
+EXPORT_SYMBOL_GPL(drm_gpuva_extobj_get);
+
+/**
+ * drm_gpuva_extobj_put - decrease the reference count of an external
+ * &drm_gem_object
+ * @mgr: the &drm_gpuva_manager storing the extobj
+ * @obj: the &drm_gem_object representing the extobj
+ *
+ * Decreases the reference count of the extobj represented by @obj.
+ *
+ * Drivers should call this for every &drm_gpuva backed by a &drm_gem_object
+ * being removed from the GPU VA space.
+ *
+ * See also drm_gpuva_unmap_put().
+ */
+void
+drm_gpuva_extobj_put(struct drm_gpuva_manager *mgr,
+		     struct drm_gem_object *obj)
+{
+	if (drm_gpuva_is_extobj(mgr, obj))
+		__drm_gpuva_extobj_remove(mgr, obj);
+}
+EXPORT_SYMBOL_GPL(drm_gpuva_extobj_put);
+
+/**
+ * drm_gpuva_gem_evict() - add / remove a &drm_gem_object to / from a
+ * &drm_gpuva_manager's evicted list
+ * @obj: the &drm_gem_object to add or remove
+ * @evict: indicates whether the object is evicted
+ *
+ * Adds a &drm_gem_object to or removes it from all &drm_gpuva_managers'
+ * evicted lists containing a mapping of this &drm_gem_object.
+ */
+void
+drm_gpuva_gem_evict(struct drm_gem_object *obj, bool evict)
+{
+	struct drm_gpuva_gem *vm_bo;
+
+	/* Required for iterating the GEM's GPUVA GEM list. If no driver
+	 * specific lock has been set, the list is protected with the GEM's
+	 * dma-resv lock.
+	 */
+	drm_gem_gpuva_assert_lock_held(obj);
+
+	/* Required to protect the GPUVA manager's evict list against
+	 * concurrent access through drm_gpuva_manager_validate(). Concurrent
+	 * insertions to the evict list through different GEM object evictions
+	 * are protected by the GPUVA manager's evict lock.
+	 */
+	dma_resv_assert_held(obj->resv);
+
+	drm_gem_for_each_gpuva_gem(vm_bo, obj) {
+		struct drm_gpuva_manager *mgr = vm_bo->mgr;
+
+		spin_lock(&mgr->evict.lock);
+		if (evict)
+			list_add_tail(&vm_bo->list.entry.evict,
+				      &mgr->evict.list);
+		else
+			list_del_init(&vm_bo->list.entry.evict);
+		spin_unlock(&mgr->evict.lock);
+	}
+}
+EXPORT_SYMBOL_GPL(drm_gpuva_gem_evict);
+
 static int
 __drm_gpuva_insert(struct drm_gpuva_manager *mgr,
 		   struct drm_gpuva *va)
@@ -806,15 +1381,20 @@ EXPORT_SYMBOL_GPL(drm_gpuva_remove);
 /**
  * drm_gpuva_link() - link a &drm_gpuva
  * @va: the &drm_gpuva to link
+ * @vm_bo: the &drm_gpuva_gem to add the &drm_gpuva to
  *
- * This adds the given &va to the GPU VA list of the &drm_gem_object it is
- * associated with.
+ * This adds the given &va to the GPU VA list of the &drm_gpuva_gem and the
+ * &drm_gpuva_gem to the &drm_gem_object it is associated with.
+ *
+ * For every &drm_gpuva entry added to the &drm_gpuva_gem an additional
+ * reference of the latter is taken.
  *
  * This function expects the caller to protect the GEM's GPUVA list against
- * concurrent access using the GEM's dma_resv lock.
+ * concurrent access using either the GEM's dma_resv lock or a driver specific
+ * lock set through drm_gem_gpuva_set_lock().
  */
 void
-drm_gpuva_link(struct drm_gpuva *va)
+drm_gpuva_link(struct drm_gpuva *va, struct drm_gpuva_gem *vm_bo)
 {
 	struct drm_gem_object *obj = va->gem.obj;
 
@@ -823,7 +1403,10 @@ drm_gpuva_link(struct drm_gpuva *va)
 
 	drm_gem_gpuva_assert_lock_held(obj);
 
-	list_add_tail(&va->gem.entry, &obj->gpuva.list);
+	drm_gpuva_gem_get(vm_bo);
+	list_add_tail(&va->gem.entry, &vm_bo->list.gpuva);
+	if (list_empty(&vm_bo->list.entry.gem))
+		list_add_tail(&vm_bo->list.entry.gem, &obj->gpuva.list);
 }
 EXPORT_SYMBOL_GPL(drm_gpuva_link);
 
@@ -834,20 +1417,39 @@ EXPORT_SYMBOL_GPL(drm_gpuva_link);
  * This removes the given &va from the GPU VA list of the &drm_gem_object it is
  * associated with.
  *
+ * This removes the given &va from the GPU VA list of the &drm_gpuva_gem and
+ * the &drm_gpuva_gem from the &drm_gem_object it is associated with in case
+ * this call unlinks the last &drm_gpuva from the &drm_gpuva_gem.
+ *
+ * For every &drm_gpuva entry removed from the &drm_gpuva_gem a reference of
+ * the latter is dropped.
+ *
  * This function expects the caller to protect the GEM's GPUVA list against
- * concurrent access using the GEM's dma_resv lock.
+ * concurrent access using either the GEM's dma_resv lock or a driver specific
+ * lock set through drm_gem_gpuva_set_lock().
  */
 void
 drm_gpuva_unlink(struct drm_gpuva *va)
 {
 	struct drm_gem_object *obj = va->gem.obj;
+	struct drm_gpuva_gem *vm_bo;
 
 	if (unlikely(!obj))
 		return;
 
 	drm_gem_gpuva_assert_lock_held(obj);
 
+	vm_bo = __drm_gpuva_gem_find(va->mgr, obj);
+	if (WARN(!vm_bo, "GPUVA doesn't seem to be linked.\n"))
+		return;
+
 	list_del_init(&va->gem.entry);
+
+	if (list_empty(&vm_bo->list.gpuva)) {
+		list_del_init(&vm_bo->list.entry.gem);
+		list_del_init(&vm_bo->list.entry.evict);
+	}
+
+	drm_gpuva_gem_put(vm_bo);
 }
 EXPORT_SYMBOL_GPL(drm_gpuva_unlink);
 
@@ -977,6 +1579,26 @@ drm_gpuva_map(struct drm_gpuva_manager *mgr,
 }
 EXPORT_SYMBOL_GPL(drm_gpuva_map);
 
+/**
+ * drm_gpuva_map_get() - helper to insert a &drm_gpuva according to a
+ * &drm_gpuva_op_map
+ * @mgr: the &drm_gpuva_manager
+ * @va: the &drm_gpuva to insert
+ * @op: the &drm_gpuva_op_map to initialize @va with
+ *
+ * Initializes the @va from the @op and inserts it into the given @mgr and
+ * increases the reference count of the corresponding extobj.
+ */
+void
+drm_gpuva_map_get(struct drm_gpuva_manager *mgr,
+		  struct drm_gpuva *va,
+		  struct drm_gpuva_op_map *op)
+{
+	drm_gpuva_map(mgr, va, op);
+	drm_gpuva_extobj_get(mgr, va->gem.obj);
+}
+EXPORT_SYMBOL_GPL(drm_gpuva_map_get);
+
 /**
  * drm_gpuva_remap() - helper to remap a &drm_gpuva according to a
  * &drm_gpuva_op_remap
@@ -992,10 +1614,10 @@ drm_gpuva_remap(struct drm_gpuva *prev,
 		struct drm_gpuva *next,
 		struct drm_gpuva_op_remap *op)
 {
-	struct drm_gpuva *curr = op->unmap->va;
-	struct drm_gpuva_manager *mgr = curr->mgr;
+	struct drm_gpuva *va = op->unmap->va;
+	struct drm_gpuva_manager *mgr = va->mgr;
 
-	drm_gpuva_remove(curr);
+	drm_gpuva_remove(va);
 
 	if (op->prev) {
 		drm_gpuva_init_from_op(prev, op->prev);
@@ -1009,6 +1631,31 @@ drm_gpuva_remap(struct drm_gpuva *prev,
 }
 EXPORT_SYMBOL_GPL(drm_gpuva_remap);
 
+/**
+ * drm_gpuva_remap_get() - helper to remap a &drm_gpuva according to a
+ * &drm_gpuva_op_remap
+ * @prev: the &drm_gpuva to remap when keeping the start of a mapping
+ * @next: the &drm_gpuva to remap when keeping the end of a mapping
+ * @op: the &drm_gpuva_op_remap to initialize @prev and @next with
+ *
+ * Removes the currently mapped &drm_gpuva and remaps it using @prev and/or
+ * @next. Additionally, if the re-map splits the existing &drm_gpuva into two
+ * separate mappings, increases the reference count of the corresponding extobj.
+ */
+void
+drm_gpuva_remap_get(struct drm_gpuva *prev,
+		    struct drm_gpuva *next,
+		    struct drm_gpuva_op_remap *op)
+{
+	struct drm_gpuva *va = op->unmap->va;
+	struct drm_gpuva_manager *mgr = va->mgr;
+
+	drm_gpuva_remap(prev, next, op);
+	if (op->prev && op->next)
+		drm_gpuva_extobj_get(mgr, va->gem.obj);
+}
+EXPORT_SYMBOL_GPL(drm_gpuva_remap_get);
+
 /**
  * drm_gpuva_unmap() - helper to remove a &drm_gpuva according to a
  * &drm_gpuva_op_unmap
@@ -1023,6 +1670,24 @@ drm_gpuva_unmap(struct drm_gpuva_op_unmap *op)
 }
 EXPORT_SYMBOL_GPL(drm_gpuva_unmap);
 
+/**
+ * drm_gpuva_unmap_put() - helper to remove a &drm_gpuva according to a
+ * &drm_gpuva_op_unmap
+ * @op: the &drm_gpuva_op_unmap specifying the &drm_gpuva to remove
+ *
+ * Removes the &drm_gpuva associated with the &drm_gpuva_op_unmap and decreases
+ * the reference count of the corresponding extobj.
+ */
+void
+drm_gpuva_unmap_put(struct drm_gpuva_op_unmap *op)
+{
+	struct drm_gpuva *va = op->va;
+
+	drm_gpuva_unmap(op);
+	drm_gpuva_extobj_put(va->mgr, va->gem.obj);
+}
+EXPORT_SYMBOL_GPL(drm_gpuva_unmap_put);
+
 static int
 op_map_cb(const struct drm_gpuva_fn_ops *fn, void *priv,
 	  u64 addr, u64 range,
@@ -1663,6 +2328,7 @@ drm_gpuva_gem_unmap_ops_create(struct drm_gpuva_manager *mgr,
 {
 	struct drm_gpuva_ops *ops;
 	struct drm_gpuva_op *op;
+	struct drm_gpuva_gem *vm_bo;
 	struct drm_gpuva *va;
 	int ret;
 
@@ -1674,7 +2340,7 @@ drm_gpuva_gem_unmap_ops_create(struct drm_gpuva_manager *mgr,
 
 	INIT_LIST_HEAD(&ops->list);
 
-	drm_gem_for_each_gpuva(va, obj) {
+	drm_gem_for_each_gpuva(va, vm_bo, mgr, obj) {
 		op = gpuva_op_alloc(mgr);
 		if (!op) {
 			ret = -ENOMEM;

diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index bc9f6aa2f3fe..783ed3ab440d 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -571,7 +571,7 @@ int drm_gem_evict(struct drm_gem_object *obj);
 /**
  * drm_gem_gpuva_init() - initialize the gpuva list of a GEM object
  * @obj: the &drm_gem_object
  *
- * This initializes the &drm_gem_object's &drm_gpuva list.
+ * This initializes the &drm_gem_object's &drm_gpuva_gem list. * * Calling this function is only necessary for drivers intending to support the * &drm_driver_feature DRIVER_GEM_GPUVA. @@ -584,28 +584,44 @@ static inline void drm_gem_gpuva_init(struct drm_gem_object *obj) } /** - * drm_gem_for_each_gpuva() - iternator to walk over a list of gpuvas - * @entry__: &drm_gpuva structure to assign to in each iteration step - * @obj__: the &drm_gem_object the &drm_gpuvas to walk are associated with + * drm_gem_for_each_gpuva_gem() - iterator to walk over a list of &drm_gpuva_gem + * @entry__: &drm_gpuva_gem structure to assign to in each iteration step + * @obj__: the &drm_gem_object the &drm_gpuva_gem to walk are associated with * - * This iterator walks over all &drm_gpuva structures associated with the - * &drm_gpuva_manager. + * This iterator walks over all &drm_gpuva_gem structures associated with the + * &drm_gem_object. */ -#define drm_gem_for_each_gpuva(entry__, obj__) \ - list_for_each_entry(entry__, &(obj__)->gpuva.list, gem.entry) +#define drm_gem_for_each_gpuva_gem(entry__, obj__) \ + list_for_each_entry(entry__, &(obj__)->gpuva.list, list.entry.gem) /** - * drm_gem_for_each_gpuva_safe() - iternator to safely walk over a list of - * gpuvas - * @entry__: &drm_gpuva structure to assign to in each iteration step - * @next__: &next &drm_gpuva to store the next step - * @obj__: the &drm_gem_object the &drm_gpuvas to walk are associated with + * drm_gem_for_each_gpuva_gem_safe() - iterator to safely walk over a list of + * &drm_gpuva_gem + * @entry__: &drm_gpuva_gem structure to assign to in each iteration step + * @next__: &next &drm_gpuva_gem to store the next step + * @obj__: the &drm_gem_object the &drm_gpuva_gem to walk are associated with * - * This iterator walks over all &drm_gpuva structures associated with the + * This iterator walks over all &drm_gpuva_gem structures associated with the * &drm_gem_object.
It is implemented with list_for_each_entry_safe(), hence * it is safe against removal of elements. */ -#define drm_gem_for_each_gpuva_safe(entry__, next__, obj__) \ - list_for_each_entry_safe(entry__, next__, &(obj__)->gpuva.list, gem.entry) +#define drm_gem_for_each_gpuva_gem_safe(entry__, next__, obj__) \ + list_for_each_entry_safe(entry__, next__, &(obj__)->gpuva.list, list.entry.gem) + +/** + * drm_gem_for_each_gpuva() - iterator to walk over a list of &drm_gpuva + * @va__: &drm_gpuva structure to assign to in each iteration step + * @vm_bo__: the &drm_gpuva_gem representing the @mgr__ and @obj__ combination + * @mgr__: the &drm_gpuva_manager the &drm_gpuvas to walk are associated with + * @obj__: the &drm_gem_object the &drm_gpuvas to walk are associated with + * + * This iterator walks over all &drm_gpuva structures associated with the + * &drm_gpuva_manager and &drm_gem_object. + */ +#define drm_gem_for_each_gpuva(va__, vm_bo__, mgr__, obj__) \ + for (vm_bo__ = drm_gpuva_gem_find(mgr__, obj__), \ + va__ = vm_bo__ ? 
list_first_entry(&vm_bo__->list.gpuva, typeof(*va__), gem.entry) : NULL; \ + va__ && !list_entry_is_head(va__, &vm_bo__->list.gpuva, gem.entry); \ + va__ = list_next_entry(va__, gem.entry)) #endif /* __DRM_GEM_H__ */ diff --git a/include/drm/drm_gpuva_mgr.h b/include/drm/drm_gpuva_mgr.h index ed8d50200cc3..693e2da3f425 100644 --- a/include/drm/drm_gpuva_mgr.h +++ b/include/drm/drm_gpuva_mgr.h @@ -26,12 +26,16 @@ */ #include +#include +#include #include #include #include +#include struct drm_gpuva_manager; +struct drm_gpuva_gem; struct drm_gpuva_fn_ops; /** @@ -140,7 +144,7 @@ struct drm_gpuva { int drm_gpuva_insert(struct drm_gpuva_manager *mgr, struct drm_gpuva *va); void drm_gpuva_remove(struct drm_gpuva *va); -void drm_gpuva_link(struct drm_gpuva *va); +void drm_gpuva_link(struct drm_gpuva *va, struct drm_gpuva_gem *vm_bo); void drm_gpuva_unlink(struct drm_gpuva *va); struct drm_gpuva *drm_gpuva_find(struct drm_gpuva_manager *mgr, @@ -240,15 +244,137 @@ struct drm_gpuva_manager { * @ops: &drm_gpuva_fn_ops providing the split/merge steps to drivers */ const struct drm_gpuva_fn_ops *ops; + + /** + * @d_obj: Dummy GEM object; used internally to pass the GPU VMs + * dma-resv to &drm_exec. 
+ */ + struct drm_gem_object d_obj; + + /** + * @resv: the &dma_resv for &drm_gem_objects mapped in this GPU VA + * space + */ + struct dma_resv *resv; + + /** + * @exec: the &drm_exec helper to lock external &drm_gem_objects + */ + struct drm_exec exec; + + /** + * @mt_ext: &maple_tree storing external &drm_gem_objects + */ + struct maple_tree mt_ext; + + /** + * @evict: structure holding the evict list and evict list lock + */ + struct { + /** + * @list: &list_head storing &drm_gem_objects currently being + * evicted + */ + struct list_head list; + + /** + * @lock: spinlock to protect the evict list against concurrent + * insertion / removal of different &drm_gpuva_gems + */ + spinlock_t lock; + } evict; }; void drm_gpuva_manager_init(struct drm_gpuva_manager *mgr, + struct drm_device *drm, const char *name, u64 start_offset, u64 range, u64 reserve_offset, u64 reserve_range, const struct drm_gpuva_fn_ops *ops); void drm_gpuva_manager_destroy(struct drm_gpuva_manager *mgr); +/** + * DRM_GPUVA_EXEC - returns the &drm_gpuva_managers &drm_exec instance + * @mgr: the &drm_gpuva_manager to return the &drm_exec instance for + */ +#define DRM_GPUVA_EXEC(mgr) &(mgr)->exec + +int drm_gpuva_manager_lock_extra(struct drm_gpuva_manager *mgr, + int (*fn)(struct drm_gpuva_manager *mgr, + void *priv, unsigned int num_fences), + void *priv, + unsigned int num_fences, + bool interruptible); + +int drm_gpuva_manager_lock_array(struct drm_gpuva_manager *mgr, + struct drm_gem_object **objs, + unsigned int num_objs, + unsigned int num_fences, + bool interruptible); + +/** + * drm_gpuva_manager_lock() - lock all dma-resv of all associated BOs + * @mgr: the &drm_gpuva_manager + * @num_fences: the amount of &dma_fences to reserve + * @interruptible: sleep interruptible if waiting + * + * Acquires all dma-resv locks of all &drm_gem_objects the given + * &drm_gpuva_manager contains mappings of. + * + * Returns: 0 on success, negative error code on failure.
+ */ +static inline int +drm_gpuva_manager_lock(struct drm_gpuva_manager *mgr, + unsigned int num_fences, + bool interruptible) +{ + return drm_gpuva_manager_lock_extra(mgr, NULL, NULL, num_fences, + interruptible); +} + +/** + * drm_gpuva_manager_unlock() - unlock dma-resv of all associated BOs + * @mgr: the &drm_gpuva_manager + * + * Releases all dma-resv locks of all &drm_gem_objects previously acquired + * through drm_gpuva_manager_lock() or its variants. + */ +static inline void +drm_gpuva_manager_unlock(struct drm_gpuva_manager *mgr) +{ + drm_exec_fini(&mgr->exec); +} + +int drm_gpuva_manager_validate(struct drm_gpuva_manager *mgr); +void drm_gpuva_manager_resv_add_fence(struct drm_gpuva_manager *mgr, + struct dma_fence *fence, + enum dma_resv_usage private_usage, + enum dma_resv_usage extobj_usage); + +int drm_gpuva_extobj_insert(struct drm_gpuva_manager *mgr, + struct drm_gem_object *obj); +void drm_gpuva_extobj_get(struct drm_gpuva_manager *mgr, + struct drm_gem_object *obj); +void drm_gpuva_extobj_put(struct drm_gpuva_manager *mgr, + struct drm_gem_object *obj); + +/** + * drm_gpuva_is_extobj() - indicates whether the given &drm_gem_object is an + * external object + * @mgr: the &drm_gpuva_manager to check + * @obj: the &drm_gem_object to check + * + * Returns: true if the &drm_gem_object &dma_resv differs from the + * &drm_gpuva_managers &dma_resv, false otherwise + */ +static inline bool drm_gpuva_is_extobj(struct drm_gpuva_manager *mgr, + struct drm_gem_object *obj) +{ + return obj && obj->resv != mgr->resv; +} + static inline struct drm_gpuva * __drm_gpuva_next(struct drm_gpuva *va) { @@ -327,6 +453,138 @@ __drm_gpuva_next(struct drm_gpuva *va) #define drm_gpuva_for_each_va_safe(va__, next__, mgr__) \ list_for_each_entry_safe(va__, next__, &(mgr__)->rb.list, rb.entry) +/** + * struct drm_gpuva_gem - structure representing a &drm_gpuva_manager and + * &drm_gem_object combination + * + * 
This structure is an abstraction representing a &drm_gpuva_manager and + * &drm_gem_object combination. It serves as an indirection to accelerate + * iterating all &drm_gpuvas within a &drm_gpuva_manager backed by the same + * &drm_gem_object. + * + * Furthermore, it is used to cache evicted GEM objects for a certain GPU-VM to + * accelerate validation. + * + * Typically, drivers want to create an instance of a struct drm_gpuva_gem once + * a GEM object is first mapped in a GPU-VM and release the instance once the + * last mapping of the GEM object in this GPU-VM is unmapped. + */ +struct drm_gpuva_gem { + + /** + * @mgr: The &drm_gpuva_manager the @obj is mapped in. + */ + struct drm_gpuva_manager *mgr; + + /** + * @obj: The &drm_gem_object being mapped in the @mgr. + */ + struct drm_gem_object *obj; + + /** + * @kref: The reference count for this &drm_gpuva_gem. + */ + struct kref kref; + + /** + * @list: Structure containing all &list_heads. + */ + struct { + /** + * @gpuva: The list of linked &drm_gpuvas. + */ + struct list_head gpuva; + + /** + * @entry: Structure containing all &list_heads serving as + * entry. + */ + struct { + /** + * @gem: List entry to attach to the &drm_gem_objects + * gpuva list. + */ + struct list_head gem; + + /** + * @evict: List entry to attach to the + * &drm_gpuva_managers evict list.
+ */ + struct list_head evict; + } entry; + } list; +}; + +struct drm_gpuva_gem * +drm_gpuva_gem_obtain(struct drm_gpuva_manager *mgr, + struct drm_gem_object *obj); +struct drm_gpuva_gem * +drm_gpuva_gem_obtain_prealloc(struct drm_gpuva_manager *mgr, + struct drm_gem_object *obj, + struct drm_gpuva_gem *__vm_bo); + +struct drm_gpuva_gem * +drm_gpuva_gem_find(struct drm_gpuva_manager *mgr, + struct drm_gem_object *obj); + +void drm_gpuva_gem_evict(struct drm_gem_object *obj, bool evict); + +struct drm_gpuva_gem * +drm_gpuva_gem_create(struct drm_gpuva_manager *mgr, + struct drm_gem_object *obj); +void drm_gpuva_gem_destroy(struct kref *kref); + +/** + * drm_gpuva_gem_get() - acquire a struct drm_gpuva_gem reference + * @vm_bo: the &drm_gpuva_gem to acquire the reference of + * + * This function acquires an additional reference to @vm_bo. It is illegal to + * call this without already holding a reference. No locks required. + */ +static inline struct drm_gpuva_gem * +drm_gpuva_gem_get(struct drm_gpuva_gem *vm_bo) +{ + kref_get(&vm_bo->kref); + return vm_bo; +} + +/** + * drm_gpuva_gem_put() - drop a struct drm_gpuva_gem reference + * @vm_bo: the &drm_gpuva_gem to release the reference of + * + * This releases a reference to @vm_bo. + */ +static inline void +drm_gpuva_gem_put(struct drm_gpuva_gem *vm_bo) +{ + kref_put(&vm_bo->kref, drm_gpuva_gem_destroy); +} + +/** + * drm_gpuva_gem_for_each_va() - iterator to walk over a list of &drm_gpuva + * @va__: &drm_gpuva structure to assign to in each iteration step + * @vm_bo__: the &drm_gpuva_gem the &drm_gpuva to walk are associated with + * + * This iterator walks over all &drm_gpuva structures associated with the + * &drm_gpuva_gem. 
+ */ +#define drm_gpuva_gem_for_each_va(va__, vm_bo__) \ + list_for_each_entry(va__, &(vm_bo__)->list.gpuva, gem.entry) + +/** + * drm_gpuva_gem_for_each_va_safe() - iterator to safely walk over a list of + * &drm_gpuva + * @va__: &drm_gpuva structure to assign to in each iteration step + * @next__: &next &drm_gpuva to store the next step + * @vm_bo__: the &drm_gpuva_gem the &drm_gpuva to walk are associated with + * + * This iterator walks over all &drm_gpuva structures associated with the + * &drm_gpuva_gem. It is implemented with list_for_each_entry_safe(), hence + * it is safe against removal of elements. + */ +#define drm_gpuva_gem_for_each_va_safe(va__, next__, vm_bo__) \ + list_for_each_entry_safe(va__, next__, &(vm_bo__)->list.gpuva, gem.entry) + /** * enum drm_gpuva_op_type - GPU VA operation type * @@ -641,6 +899,30 @@ struct drm_gpuva_fn_ops { */ void (*op_free)(struct drm_gpuva_op *op); + /** + * @vm_bo_alloc: called when the &drm_gpuva_manager allocates + * a struct drm_gpuva_gem + * + * Some drivers may want to embed struct drm_gpuva_gem into driver + * specific structures. By implementing this callback drivers can + * allocate memory accordingly. + * + * This callback is optional. + */ + struct drm_gpuva_gem *(*vm_bo_alloc)(void); + + /** + * @vm_bo_free: called when the &drm_gpuva_manager frees a + * struct drm_gpuva_gem + * + * Some drivers may want to embed struct drm_gpuva_gem into driver + * specific structures. By implementing this callback drivers can + * free the previously allocated memory accordingly. + * + * This callback is optional. + */ + void (*vm_bo_free)(struct drm_gpuva_gem *vm_bo); + /** * @sm_step_map: called from &drm_gpuva_sm_map to finally insert the * mapping once all previous steps were completed @@ -684,6 +966,17 @@ struct drm_gpuva_fn_ops { * used. 
*/ int (*sm_step_unmap)(struct drm_gpuva_op *op, void *priv); + + /** + * @bo_validate: called from drm_gpuva_manager_validate() + * + * Drivers receive this callback for every evicted &drm_gem_object being + * mapped in the corresponding &drm_gpuva_manager. + * + * Typically, drivers would call their driver specific variant of + * ttm_bo_validate() from within this callback. + */ + int (*bo_validate)(struct drm_gem_object *obj); }; int drm_gpuva_sm_map(struct drm_gpuva_manager *mgr, void *priv, @@ -696,11 +989,18 @@ int drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr, void *priv, void drm_gpuva_map(struct drm_gpuva_manager *mgr, struct drm_gpuva *va, struct drm_gpuva_op_map *op); +void drm_gpuva_map_get(struct drm_gpuva_manager *mgr, + struct drm_gpuva *va, + struct drm_gpuva_op_map *op); void drm_gpuva_remap(struct drm_gpuva *prev, struct drm_gpuva *next, struct drm_gpuva_op_remap *op); +void drm_gpuva_remap_get(struct drm_gpuva *prev, + struct drm_gpuva *next, + struct drm_gpuva_op_remap *op); void drm_gpuva_unmap(struct drm_gpuva_op_unmap *op); +void drm_gpuva_unmap_put(struct drm_gpuva_op_unmap *op); #endif /* __DRM_GPUVA_MGR_H__ */ From patchwork Sun Aug 20 21:53:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Danilo Krummrich X-Patchwork-Id: 13358904 From: Danilo Krummrich To: airlied@gmail.com, daniel@ffwll.ch, matthew.brost@intel.com, thomas.hellstrom@linux.intel.com, sarah.walker@imgtec.com, donald.robson@imgtec.com, boris.brezillon@collabora.com, christian.koenig@amd.com, faith.ekstrand@collabora.com, bskeggs@redhat.com, Liam.Howlett@oracle.com Subject: [PATCH drm-misc-next 3/3] drm/nouveau: gpuva mgr dma-resv/extobj handling, GEM validation Date: Sun, 20 Aug 2023 23:53:10 +0200 Message-ID: <20230820215320.4187-4-dakr@redhat.com> In-Reply-To: <20230820215320.4187-1-dakr@redhat.com> References: <20230820215320.4187-1-dakr@redhat.com> Cc: nouveau@lists.freedesktop.org, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org Make use of the DRM GPUVA managers GPU-VM common dma-resv, external GEM object tracking, dma-resv locking, evicted GEM object tracking and
validation features. Signed-off-by: Danilo Krummrich --- drivers/gpu/drm/nouveau/nouveau_bo.c | 4 +- drivers/gpu/drm/nouveau/nouveau_exec.c | 51 ++----- drivers/gpu/drm/nouveau/nouveau_gem.c | 4 +- drivers/gpu/drm/nouveau/nouveau_sched.h | 2 - drivers/gpu/drm/nouveau/nouveau_uvmm.c | 191 +++++++++++++++++------- 5 files changed, 150 insertions(+), 102 deletions(-) diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c index 19cab37ac69c..64f50adb2856 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bo.c +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c @@ -1060,17 +1060,18 @@ nouveau_bo_move(struct ttm_buffer_object *bo, bool evict, { struct nouveau_drm *drm = nouveau_bdev(bo->bdev); struct nouveau_bo *nvbo = nouveau_bo(bo); + struct drm_gem_object *obj = &bo->base; struct ttm_resource *old_reg = bo->resource; struct nouveau_drm_tile *new_tile = NULL; int ret = 0; - if (new_reg->mem_type == TTM_PL_TT) { ret = nouveau_ttm_tt_bind(bo->bdev, bo->ttm, new_reg); if (ret) return ret; } + drm_gpuva_gem_evict(obj, evict); nouveau_bo_move_ntfy(bo, new_reg); ret = ttm_bo_wait_ctx(bo, ctx); if (ret) @@ -1135,6 +1136,7 @@ nouveau_bo_move(struct ttm_buffer_object *bo, bool evict, out_ntfy: if (ret) { nouveau_bo_move_ntfy(bo, bo->resource); + drm_gpuva_gem_evict(obj, !evict); } return ret; } diff --git a/drivers/gpu/drm/nouveau/nouveau_exec.c b/drivers/gpu/drm/nouveau/nouveau_exec.c index 0f927adda4ed..fadb20824b26 100644 --- a/drivers/gpu/drm/nouveau/nouveau_exec.c +++ b/drivers/gpu/drm/nouveau/nouveau_exec.c @@ -1,7 +1,5 @@ // SPDX-License-Identifier: MIT -#include - #include "nouveau_drv.h" #include "nouveau_gem.h" #include "nouveau_mem.h" @@ -91,9 +89,6 @@ nouveau_exec_job_submit(struct nouveau_job *job) struct nouveau_exec_job *exec_job = to_nouveau_exec_job(job); struct nouveau_cli *cli = job->cli; struct nouveau_uvmm *uvmm = nouveau_cli_uvmm(cli); - struct drm_exec *exec = &job->exec; - struct drm_gem_object *obj; - unsigned long index; int ret; ret 
= nouveau_fence_new(&exec_job->fence); @@ -101,52 +96,30 @@ nouveau_exec_job_submit(struct nouveau_job *job) return ret; nouveau_uvmm_lock(uvmm); - drm_exec_init(exec, DRM_EXEC_INTERRUPTIBLE_WAIT | - DRM_EXEC_IGNORE_DUPLICATES); - drm_exec_until_all_locked(exec) { - struct drm_gpuva *va; - - drm_gpuva_for_each_va(va, &uvmm->umgr) { - if (unlikely(va == &uvmm->umgr.kernel_alloc_node)) - continue; - - ret = drm_exec_prepare_obj(exec, va->gem.obj, 1); - drm_exec_retry_on_contention(exec); - if (ret) - goto err_uvmm_unlock; - } + ret = drm_gpuva_manager_lock(&uvmm->umgr, 1, false); + if (ret) { + nouveau_uvmm_unlock(uvmm); + return ret; } nouveau_uvmm_unlock(uvmm); - drm_exec_for_each_locked_object(exec, index, obj) { - struct nouveau_bo *nvbo = nouveau_gem_object(obj); - - ret = nouveau_bo_validate(nvbo, true, false); - if (ret) - goto err_exec_fini; + ret = drm_gpuva_manager_validate(&uvmm->umgr); + if (ret) { + drm_gpuva_manager_unlock(&uvmm->umgr); + return ret; } return 0; - -err_uvmm_unlock: - nouveau_uvmm_unlock(uvmm); -err_exec_fini: - drm_exec_fini(exec); - return ret; - } static void nouveau_exec_job_armed_submit(struct nouveau_job *job) { - struct drm_exec *exec = &job->exec; - struct drm_gem_object *obj; - unsigned long index; - - drm_exec_for_each_locked_object(exec, index, obj) - dma_resv_add_fence(obj->resv, job->done_fence, job->resv_usage); + struct nouveau_uvmm *uvmm = nouveau_cli_uvmm(job->cli); - drm_exec_fini(exec); + drm_gpuva_manager_resv_add_fence(&uvmm->umgr, job->done_fence, + job->resv_usage, job->resv_usage); + drm_gpuva_manager_unlock(&uvmm->umgr); } static struct dma_fence * diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c index f39360870c70..dec34a88f8b2 100644 --- a/drivers/gpu/drm/nouveau/nouveau_gem.c +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c @@ -111,7 +111,7 @@ nouveau_gem_object_open(struct drm_gem_object *gem, struct drm_file *file_priv) if (vmm->vmm.object.oclass < NVIF_CLASS_VMM_NV50) 
return 0; - if (nvbo->no_share && uvmm && &uvmm->resv != nvbo->bo.base.resv) + if (nvbo->no_share && uvmm && uvmm->umgr.resv != nvbo->bo.base.resv) return -EPERM; ret = ttm_bo_reserve(&nvbo->bo, false, false, NULL); @@ -245,7 +245,7 @@ nouveau_gem_new(struct nouveau_cli *cli, u64 size, int align, uint32_t domain, if (unlikely(!uvmm)) return -EINVAL; - resv = &uvmm->resv; + resv = uvmm->umgr.resv; } if (!(domain & (NOUVEAU_GEM_DOMAIN_VRAM | NOUVEAU_GEM_DOMAIN_GART))) diff --git a/drivers/gpu/drm/nouveau/nouveau_sched.h b/drivers/gpu/drm/nouveau/nouveau_sched.h index 27ac19792597..ccedc80685b3 100644 --- a/drivers/gpu/drm/nouveau/nouveau_sched.h +++ b/drivers/gpu/drm/nouveau/nouveau_sched.h @@ -5,7 +5,6 @@ #include -#include #include #include "nouveau_drv.h" @@ -54,7 +53,6 @@ struct nouveau_job { struct drm_file *file_priv; struct nouveau_cli *cli; - struct drm_exec exec; enum dma_resv_usage resv_usage; struct dma_fence *done_fence; diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c index 3a1e8538f205..ce1975cca8a9 100644 --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c @@ -71,6 +71,7 @@ struct bind_job_op { u32 handle; u64 offset; struct drm_gem_object *obj; + struct drm_gpuva_gem *vm_bo; } gem; struct nouveau_uvma_region *reg; @@ -436,8 +437,10 @@ nouveau_uvma_region_complete(struct nouveau_uvma_region *reg) static void op_map_prepare_unwind(struct nouveau_uvma *uvma) { + struct drm_gpuva *va = &uvma->va; nouveau_uvma_gem_put(uvma); - drm_gpuva_remove(&uvma->va); + drm_gpuva_remove(va); + drm_gpuva_extobj_put(va->mgr, va->gem.obj); nouveau_uvma_free(uvma); } @@ -445,6 +448,7 @@ static void op_unmap_prepare_unwind(struct drm_gpuva *va) { drm_gpuva_insert(va->mgr, va); + drm_gpuva_extobj_get(va->mgr, va->gem.obj); } static void @@ -466,14 +470,17 @@ nouveau_uvmm_sm_prepare_unwind(struct nouveau_uvmm *uvmm, break; case DRM_GPUVA_OP_REMAP: { struct drm_gpuva_op_remap *r = &op->remap; + 
 		struct drm_gpuva *va = r->unmap->va;

+		drm_gpuva_extobj_get(va->mgr, va->gem.obj);
 		if (r->next)
 			op_map_prepare_unwind(new->next);
 		if (r->prev)
 			op_map_prepare_unwind(new->prev);
-		op_unmap_prepare_unwind(r->unmap->va);
+		op_unmap_prepare_unwind(va);
+		drm_gpuva_extobj_put(va->mgr, va->gem.obj);
 		break;
 	}
 	case DRM_GPUVA_OP_UNMAP:
@@ -589,7 +596,7 @@ op_map_prepare(struct nouveau_uvmm *uvmm,
 	uvma->region = args->region;
 	uvma->kind = args->kind;

-	drm_gpuva_map(&uvmm->umgr, &uvma->va, op);
+	drm_gpuva_map_get(&uvmm->umgr, &uvma->va, op);

 	/* Keep a reference until this uvma is destroyed. */
 	nouveau_uvma_gem_get(uvma);
@@ -601,7 +608,7 @@ op_map_prepare(struct nouveau_uvmm *uvmm,
 static void
 op_unmap_prepare(struct drm_gpuva_op_unmap *u)
 {
-	drm_gpuva_unmap(u);
+	drm_gpuva_unmap_put(u);
 }

 static int
@@ -632,6 +639,7 @@ nouveau_uvmm_sm_prepare(struct nouveau_uvmm *uvmm,
 				goto unwind;
 			}
 		}
+		break;
 	}
 	case DRM_GPUVA_OP_REMAP: {
@@ -644,6 +652,7 @@ nouveau_uvmm_sm_prepare(struct nouveau_uvmm *uvmm,
 		u64 urange = va->va.range;
 		u64 uend = ustart + urange;

+		drm_gpuva_extobj_get(va->mgr, va->gem.obj);
 		op_unmap_prepare(r->unmap);

 		if (r->prev) {
@@ -668,6 +677,7 @@ nouveau_uvmm_sm_prepare(struct nouveau_uvmm *uvmm,
 			if (args)
 				vmm_get_end = ustart;
 		}
+		drm_gpuva_extobj_put(va->mgr, va->gem.obj);

 		if (args && (r->prev && r->next))
 			vmm_get_start = vmm_get_end = 0;
@@ -1112,22 +1122,34 @@ bind_validate_region(struct nouveau_job *job)
 }

 static void
-bind_link_gpuvas(struct drm_gpuva_ops *ops, struct nouveau_uvma_prealloc *new)
+bind_link_gpuvas(struct bind_job_op *bop)
 {
+	struct nouveau_uvma_prealloc *new = &bop->new;
+	struct drm_gpuva_gem *vm_bo = bop->gem.vm_bo;
+	struct drm_gpuva_ops *ops = bop->ops;
 	struct drm_gpuva_op *op;

 	drm_gpuva_for_each_op(op, ops) {
 		switch (op->op) {
 		case DRM_GPUVA_OP_MAP:
-			drm_gpuva_link(&new->map->va);
+			drm_gpuva_link(&new->map->va, vm_bo);
 			break;
-		case DRM_GPUVA_OP_REMAP:
+		case DRM_GPUVA_OP_REMAP: {
+			struct drm_gpuva *va = op->remap.unmap->va;
+			struct drm_gpuva_gem *vm_bo;
+
+			vm_bo = drm_gpuva_gem_find(va->mgr, va->gem.obj);
+			BUG_ON(!vm_bo);
+
 			if (op->remap.prev)
-				drm_gpuva_link(&new->prev->va);
+				drm_gpuva_link(&new->prev->va, vm_bo);
 			if (op->remap.next)
-				drm_gpuva_link(&new->next->va);
-			drm_gpuva_unlink(op->remap.unmap->va);
+				drm_gpuva_link(&new->next->va, vm_bo);
+			drm_gpuva_unlink(va);
+
+			drm_gpuva_gem_put(vm_bo);
 			break;
+		}
 		case DRM_GPUVA_OP_UNMAP:
 			drm_gpuva_unlink(op->unmap.va);
 			break;
@@ -1137,22 +1159,72 @@ bind_link_gpuvas(struct drm_gpuva_ops *ops, struct nouveau_uvma_prealloc *new)
 	}
 }

+static int
+bind_lock_extra(struct drm_gpuva_manager *mgr, void *priv,
+		unsigned int num_fences)
+{
+	struct nouveau_uvmm_bind_job *bind_job = priv;
+	struct bind_job_op *op;
+	int ret;
+
+	list_for_each_op(op, &bind_job->ops) {
+		struct drm_gpuva_op *va_op;
+
+		if (IS_ERR_OR_NULL(op->ops))
+			continue;
+
+		drm_gpuva_for_each_op(va_op, op->ops) {
+			struct drm_gem_object *obj = op_gem_obj(va_op);
+
+			if (unlikely(!obj))
+				continue;
+
+			if (va_op->op != DRM_GPUVA_OP_UNMAP)
+				continue;
+
+			ret = drm_exec_prepare_obj(DRM_GPUVA_EXEC(mgr), obj,
+						   num_fences);
+			if (ret)
+				return ret;
+		}
+	}
+
+	return 0;
+}
+
 static int
 nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
 {
 	struct nouveau_uvmm *uvmm = nouveau_cli_uvmm(job->cli);
 	struct nouveau_uvmm_bind_job *bind_job = to_uvmm_bind_job(job);
 	struct nouveau_sched_entity *entity = job->entity;
-	struct drm_exec *exec = &job->exec;
 	struct bind_job_op *op;
 	int ret;

 	list_for_each_op(op, &bind_job->ops) {
 		if (op->op == OP_MAP) {
-			op->gem.obj = drm_gem_object_lookup(job->file_priv,
-							    op->gem.handle);
-			if (!op->gem.obj)
+			struct drm_gem_object *obj;
+
+			obj = drm_gem_object_lookup(job->file_priv,
+						    op->gem.handle);
+			if (!obj)
 				return -ENOENT;
+
+			dma_resv_lock(obj->resv, NULL);
+			op->gem.vm_bo = drm_gpuva_gem_obtain(&uvmm->umgr, obj);
+			dma_resv_unlock(obj->resv);
+			if (IS_ERR(op->gem.vm_bo)) {
+				drm_gem_object_put(obj);
+				return PTR_ERR(op->gem.vm_bo);
+			}
+
+			ret = drm_gpuva_extobj_insert(&uvmm->umgr, obj);
+			if (ret) {
+				drm_gem_object_put(obj);
+				return ret;
+			}
+
+			op->gem.obj = obj;
 		}

 		ret = bind_validate_op(job, op);
@@ -1286,30 +1358,10 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
 		}
 	}

-	drm_exec_init(exec, DRM_EXEC_INTERRUPTIBLE_WAIT |
-			    DRM_EXEC_IGNORE_DUPLICATES);
-	drm_exec_until_all_locked(exec) {
-		list_for_each_op(op, &bind_job->ops) {
-			struct drm_gpuva_op *va_op;
-
-			if (IS_ERR_OR_NULL(op->ops))
-				continue;
-
-			drm_gpuva_for_each_op(va_op, op->ops) {
-				struct drm_gem_object *obj = op_gem_obj(va_op);
-
-				if (unlikely(!obj))
-					continue;
-
-				ret = drm_exec_prepare_obj(exec, obj, 1);
-				drm_exec_retry_on_contention(exec);
-				if (ret) {
-					op = list_last_op(&bind_job->ops);
-					goto unwind;
-				}
-			}
-		}
-	}
+	ret = drm_gpuva_manager_lock_extra(&uvmm->umgr, bind_lock_extra,
+					   bind_job, 1, false);
+	if (ret)
+		goto unwind_continue;

 	list_for_each_op(op, &bind_job->ops) {
 		struct drm_gpuva_op *va_op;
@@ -1363,7 +1415,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
 		case OP_UNMAP_SPARSE:
 		case OP_MAP:
 		case OP_UNMAP:
-			bind_link_gpuvas(op->ops, &op->new);
+			bind_link_gpuvas(op);
 			break;
 		default:
 			break;
@@ -1409,21 +1461,18 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
 	}
 	nouveau_uvmm_unlock(uvmm);

-	drm_exec_fini(exec);
+	drm_gpuva_manager_unlock(&uvmm->umgr);
 	return ret;
 }

 static void
 nouveau_uvmm_bind_job_armed_submit(struct nouveau_job *job)
 {
-	struct drm_exec *exec = &job->exec;
-	struct drm_gem_object *obj;
-	unsigned long index;
-
-	drm_exec_for_each_locked_object(exec, index, obj)
-		dma_resv_add_fence(obj->resv, job->done_fence, job->resv_usage);
+	struct nouveau_uvmm *uvmm = nouveau_cli_uvmm(job->cli);

-	drm_exec_fini(exec);
+	drm_gpuva_manager_resv_add_fence(&uvmm->umgr, job->done_fence,
+					 job->resv_usage, job->resv_usage);
+	drm_gpuva_manager_unlock(&uvmm->umgr);
 }

 static struct dma_fence *
@@ -1510,8 +1559,16 @@ nouveau_uvmm_bind_job_free_work_fn(struct work_struct *work)
 		if (!IS_ERR_OR_NULL(op->ops))
 			drm_gpuva_ops_free(&uvmm->umgr, op->ops);

-		if (obj)
+		if (!IS_ERR_OR_NULL(op->gem.vm_bo)) {
+			dma_resv_lock(obj->resv, NULL);
+			drm_gpuva_gem_put(op->gem.vm_bo);
+			dma_resv_unlock(obj->resv);
+		}
+
+		if (obj) {
+			drm_gpuva_extobj_put(&uvmm->umgr, obj);
 			drm_gem_object_put(obj);
+		}
 	}

 	spin_lock(&entity->job.list.lock);
@@ -1775,15 +1832,18 @@ void
 nouveau_uvmm_bo_map_all(struct nouveau_bo *nvbo, struct nouveau_mem *mem)
 {
 	struct drm_gem_object *obj = &nvbo->bo.base;
+	struct drm_gpuva_gem *vm_bo;
 	struct drm_gpuva *va;

 	dma_resv_assert_held(obj->resv);

-	drm_gem_for_each_gpuva(va, obj) {
-		struct nouveau_uvma *uvma = uvma_from_va(va);
+	drm_gem_for_each_gpuva_gem(vm_bo, obj) {
+		drm_gpuva_gem_for_each_va(va, vm_bo) {
+			struct nouveau_uvma *uvma = uvma_from_va(va);

-		nouveau_uvma_map(uvma, mem);
-		drm_gpuva_invalidate(va, false);
+			nouveau_uvma_map(uvma, mem);
+			drm_gpuva_invalidate(va, false);
+		}
 	}
 }

@@ -1791,18 +1851,33 @@ void
 nouveau_uvmm_bo_unmap_all(struct nouveau_bo *nvbo)
 {
 	struct drm_gem_object *obj = &nvbo->bo.base;
+	struct drm_gpuva_gem *vm_bo;
 	struct drm_gpuva *va;

 	dma_resv_assert_held(obj->resv);

-	drm_gem_for_each_gpuva(va, obj) {
-		struct nouveau_uvma *uvma = uvma_from_va(va);
+	drm_gem_for_each_gpuva_gem(vm_bo, obj) {
+		drm_gpuva_gem_for_each_va(va, vm_bo) {
+			struct nouveau_uvma *uvma = uvma_from_va(va);

-		nouveau_uvma_unmap(uvma);
-		drm_gpuva_invalidate(va, true);
+			nouveau_uvma_unmap(uvma);
+			drm_gpuva_invalidate(va, true);
+		}
 	}
 }

+static int
+nouveau_uvmm_bo_validate(struct drm_gem_object *obj)
+{
+	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
+
+	return nouveau_bo_validate(nvbo, true, false);
+}
+
+static const struct drm_gpuva_fn_ops nouveau_uvmm_gpuva_ops = {
+	.bo_validate = nouveau_uvmm_bo_validate,
+};
+
 int
 nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
 		  u64 kernel_managed_addr, u64 kernel_managed_size)
@@ -1835,11 +1910,11 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
 	uvmm->kernel_managed_addr = kernel_managed_addr;
 	uvmm->kernel_managed_size = kernel_managed_size;

-	drm_gpuva_manager_init(&uvmm->umgr, cli->name,
+	drm_gpuva_manager_init(&uvmm->umgr, cli->drm->dev, cli->name,
 			       NOUVEAU_VA_SPACE_START,
 			       NOUVEAU_VA_SPACE_END,
 			       kernel_managed_addr, kernel_managed_size,
-			       NULL);
+			       &nouveau_uvmm_gpuva_ops);

 	ret = nvif_vmm_ctor(&cli->mmu, "uvmm",
 			    cli->vmm.vmm.object.oclass, RAW,