From patchwork Tue Jan 21 15:15:43 2025
X-Patchwork-Submitter: Philipp Stanner
X-Patchwork-Id: 13946416
From: Philipp Stanner
To: Matthew Brost, Danilo Krummrich, Philipp Stanner, Maarten Lankhorst,
    Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
    Sumit Semwal, Christian König
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH v2 1/3] drm/sched: Document run_job() refcount hazard
Date: Tue, 21 Jan 2025 16:15:43 +0100
Message-ID: <20250121151544.44949-3-phasta@kernel.org>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20250121151544.44949-2-phasta@kernel.org>
References: <20250121151544.44949-2-phasta@kernel.org>

drm_sched_backend_ops.run_job() returns a dma_fence for the scheduler.
That fence is signalled by the driver once the hardware has completed
the associated job. The scheduler does not increment the reference
count on that fence, but implicitly expects to inherit this fence from
run_job(). This is relatively subtle and prone to misunderstandings:
it implies that, to keep a reference for itself, a driver needs to
call dma_fence_get() in addition to dma_fence_init() in that callback.
It's further complicated by the fact that the scheduler even decrements
the refcount in drm_sched_run_job_work(), since it created a new
reference in drm_sched_fence_scheduled(). It does, however, still use
its pointer to the fence after calling dma_fence_put() - which is safe
because of the aforementioned new reference, but which still violates
the refcounting rules.

Move the call to dma_fence_put() to after the last usage of the fence.

Document the necessity to increment the reference count in
drm_sched_backend_ops.run_job().

Suggested-by: Danilo Krummrich
Signed-off-by: Philipp Stanner
Reviewed-by: Danilo Krummrich
---
 drivers/gpu/drm/scheduler/sched_main.c |  5 ++---
 include/drm/gpu_scheduler.h            | 19 +++++++++++++++----
 2 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 57da84908752..7e69ebc09513 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -1218,15 +1218,14 @@ static void drm_sched_run_job_work(struct work_struct *w)
 	drm_sched_fence_scheduled(s_fence, fence);
 
 	if (!IS_ERR_OR_NULL(fence)) {
-		/* Drop for original kref_init of the fence */
-		dma_fence_put(fence);
-
 		r = dma_fence_add_callback(fence, &sched_job->cb,
 					   drm_sched_job_done_cb);
 		if (r == -ENOENT)
 			drm_sched_job_done(sched_job, fence->error);
 		else if (r)
 			DRM_DEV_ERROR(sched->dev, "fence add callback failed (%d)\n", r);
+
+		dma_fence_put(fence);
 	} else {
 		drm_sched_job_done(sched_job, IS_ERR(fence) ?
 				   PTR_ERR(fence) : 0);
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 95e17504e46a..d5cd2a78f27c 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -420,10 +420,21 @@ struct drm_sched_backend_ops {
 					 struct drm_sched_entity *s_entity);
 
 	/**
-	 * @run_job: Called to execute the job once all of the dependencies
-	 * have been resolved. This may be called multiple times, if
-	 * timedout_job() has happened and drm_sched_job_recovery()
-	 * decides to try it again.
+	 * @run_job: Called to execute the job once all of the dependencies
+	 * have been resolved. This may be called multiple times, if
+	 * timedout_job() has happened and drm_sched_job_recovery() decides to
+	 * try it again.
+	 *
+	 * @sched_job: the job to run
+	 *
+	 * Returns: dma_fence the driver must signal once the hardware has
+	 * completed the job ("hardware fence").
+	 *
+	 * Note that the scheduler expects to 'inherit' its own reference to
+	 * this fence from the callback. It does not invoke an extra
+	 * dma_fence_get() on it. Consequently, this callback must take a
+	 * reference for the scheduler, and additional ones for the driver's
+	 * respective needs.
 	 */
 	struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
 
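To illustrate the contract documented above: a driver-side run_job()
might be shaped like the following sketch. This is not part of the
series; the my_driver_* names and the embedded-fence layout are
hypothetical. The point is the extra dma_fence_get() for the reference
the driver keeps for itself, while the reference created by
dma_fence_init() is the one the scheduler inherits.

/*
 * Hypothetical sketch, not from this series: a driver's run_job()
 * following the documented refcount rule.
 */
static struct dma_fence *my_driver_run_job(struct drm_sched_job *sched_job)
{
	struct my_driver_job *job = to_my_driver_job(sched_job);
	struct dma_fence *fence = &job->hw_fence;

	/* Refcount is 1 after init; the scheduler inherits this reference. */
	dma_fence_init(fence, &my_driver_fence_ops, &job->fence_lock,
		       job->fence_context, job->seqno);

	/* Additional reference for the driver's own use. */
	dma_fence_get(fence);

	my_driver_hw_submit(job);	/* hypothetical submission helper */

	return fence;
}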
From patchwork Tue Jan 21 15:15:45 2025
X-Patchwork-Submitter: Philipp Stanner
X-Patchwork-Id: 13946417
From: Philipp Stanner
To: Matthew Brost, Danilo Krummrich, Philipp Stanner, Maarten Lankhorst,
    Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
    Sumit Semwal, Christian König
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org,
    Philipp Stanner
Subject: [PATCH v2 2/3] drm/sched: Adjust outdated docu for run_job()
Date: Tue, 21 Jan 2025 16:15:45 +0100
Message-ID: <20250121151544.44949-5-phasta@kernel.org>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20250121151544.44949-2-phasta@kernel.org>
References: <20250121151544.44949-2-phasta@kernel.org>

The documentation for drm_sched_backend_ops.run_job() mentions a
function called drm_sched_job_recovery(). This function does not
exist. What is actually meant is drm_sched_resubmit_jobs(), which is
by now itself deprecated.

Remove the mention of the nonexistent function.

Discourage the behavior of drm_sched_backend_ops.run_job() being
called multiple times for the same job.

Signed-off-by: Philipp Stanner
---
Folks,

I need input for those "refcount" rules.
I suggest that we either delete that section, or that someone
(Christian?) provide details about what those rules are, as Danilo
requested.

P.
---
 include/drm/gpu_scheduler.h | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index d5cd2a78f27c..cf40fdb55541 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -421,14 +421,19 @@ struct drm_sched_backend_ops {
 
 	/**
 	 * @run_job: Called to execute the job once all of the dependencies
-	 * have been resolved. This may be called multiple times, if
-	 * timedout_job() has happened and drm_sched_job_recovery() decides to
-	 * try it again.
+	 * have been resolved.
+	 *
+	 * The deprecated drm_sched_resubmit_jobs() (called from
+	 * drm_sched_backend_ops.timedout_job()) can invoke this again with
+	 * the same parameters. Doing so is strongly discouraged because it
+	 * violates dma_fence rules.
 	 *
 	 * @sched_job: the job to run
 	 *
-	 * Returns: dma_fence the driver must signal once the hardware has
-	 * completed the job ("hardware fence").
+	 * Returns:
+	 * On success: dma_fence the driver must signal once the hardware has
+	 * completed the job ("hardware fence").
+	 * On failure: NULL or an ERR_PTR.
 	 *
 	 * Note that the scheduler expects to 'inherit' its own reference to
 	 * this fence from the callback. It does not invoke an extra
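To make the clarified return convention concrete, here is a hedged
sketch of run_job()'s error path; the my_driver_* names are again
hypothetical, and the fence is assumed to have been set up as in the
earlier sketch:

/*
 * Hypothetical sketch, not from this series: the return convention the
 * updated documentation describes.
 */
static struct dma_fence *my_driver_run_job(struct drm_sched_job *sched_job)
{
	struct my_driver_job *job = to_my_driver_job(sched_job);
	int ret;

	ret = my_driver_hw_submit(job);	/* hypothetical submission helper */
	if (ret)
		return ERR_PTR(ret);	/* on failure: an ERR_PTR (or NULL) */

	/* On success: the hardware fence the driver will signal later. */
	return &job->hw_fence;
}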
From patchwork Tue Jan 21 15:15:46 2025
X-Patchwork-Submitter: Philipp Stanner
X-Patchwork-Id: 13946418
From: Philipp Stanner
To: Matthew Brost, Danilo Krummrich, Philipp Stanner, Maarten Lankhorst,
    Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
    Sumit Semwal, Christian König
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org,
    Philipp Stanner
Subject: [PATCH v2 3/3] drm/sched: Update timedout_job()'s documentation
Date: Tue, 21 Jan 2025 16:15:46 +0100
Message-ID: <20250121151544.44949-6-phasta@kernel.org>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20250121151544.44949-2-phasta@kernel.org>
References: <20250121151544.44949-2-phasta@kernel.org>

drm_sched_backend_ops.timedout_job()'s documentation is outdated. It
mentions the deprecated function drm_sched_resubmit_jobs().
Furthermore, it does not point out the important distinction between
hardware and firmware schedulers.

Since firmware schedulers typically only use one entity per scheduler,
timeout handling is significantly simpler: the entity the faulted job
came from can simply be killed without affecting innocent processes.

Update the documentation with that distinction and other details.

Reformat the docstring to bring it in line with the style of the other
callbacks.

Signed-off-by: Philipp Stanner
---
 include/drm/gpu_scheduler.h | 82 ++++++++++++++++++++++---------------
 1 file changed, 49 insertions(+), 33 deletions(-)

diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index cf40fdb55541..4806740b9023 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -394,8 +394,14 @@ static inline bool drm_sched_invalidate_job(struct drm_sched_job *s_job,
 }
 
 enum drm_gpu_sched_stat {
-	DRM_GPU_SCHED_STAT_NONE, /* Reserve 0 */
+	/* Reserve 0 */
+	DRM_GPU_SCHED_STAT_NONE,
+
+	/* Operation succeeded */
 	DRM_GPU_SCHED_STAT_NOMINAL,
+
+	/* Failure because the device is no longer available, for example
+	 * because it was unplugged. */
 	DRM_GPU_SCHED_STAT_ENODEV,
 };
 
@@ -447,43 +453,53 @@ struct drm_sched_backend_ops {
 	 * @timedout_job: Called when a job has taken too long to execute,
 	 * to trigger GPU recovery.
 	 *
-	 * This method is called in a workqueue context.
+	 * @sched_job: The job that has timed out
 	 *
-	 * Drivers typically issue a reset to recover from GPU hangs, and this
-	 * procedure usually follows the following workflow:
+	 * Returns: A drm_gpu_sched_stat enum.
 	 *
-	 * 1. Stop the scheduler using drm_sched_stop(). This will park the
-	 *    scheduler thread and cancel the timeout work, guaranteeing that
-	 *    nothing is queued while we reset the hardware queue
-	 * 2. Try to gracefully stop non-faulty jobs (optional)
-	 * 3. Issue a GPU reset (driver-specific)
-	 * 4. Re-submit jobs using drm_sched_resubmit_jobs()
-	 * 5. Restart the scheduler using drm_sched_start(). At that point, new
-	 *    jobs can be queued, and the scheduler thread is unblocked
+	 * Drivers typically issue a reset to recover from GPU hangs.
+	 * This procedure looks very different depending on whether a firmware
+	 * or a hardware scheduler is being used.
+	 *
+	 * For a FIRMWARE SCHEDULER, each (pseudo-)ring has one scheduler, and
+	 * each scheduler has one entity. Hence, you typically follow these
+	 * steps:
+	 *
+	 * 1. Stop the scheduler using drm_sched_stop(). This will pause the
	 *    scheduler workqueues and cancel the timeout work, guaranteeing
+	 *    that nothing is queued while we remove the ring.
+	 * 2. Remove the ring. In most (all?) cases the firmware will make
+	 *    sure that the corresponding parts of the hardware are reset,
+	 *    and that other rings are not impacted.
+	 * 3. Kill the entity the faulted job stems from, and the associated
+	 *    scheduler.
+	 *
+	 * For a HARDWARE SCHEDULER, each ring also has one scheduler, but
+	 * each scheduler is typically associated with many entities. This
+	 * implies that the entities associated with the affected scheduler
+	 * cannot all be torn down, because that would effectively also kill
+	 * innocent userspace processes which did not submit faulty jobs.
+	 *
+	 * Consequently, the procedure to recover with a hardware scheduler
+	 * should look like this:
+	 *
+	 * 1. Stop all schedulers impacted by the reset using drm_sched_stop().
+	 * 2. Figure out which entity the faulted job belongs to.
+	 * 3. Kill that entity.
+	 * 4. Issue a GPU reset on all faulty rings (driver-specific).
+	 * 5. Re-submit jobs on all schedulers impacted by the reset to the
+	 *    entities which are still alive.
+	 * 6. Restart all schedulers that were stopped in step #1 using
+	 *    drm_sched_start().
 	 *
 	 * Note that some GPUs have distinct hardware queues but need to reset
 	 * the GPU globally, which requires extra synchronization between the
-	 * timeout handler of the different &drm_gpu_scheduler. One way to
-	 * achieve this synchronization is to create an ordered workqueue
-	 * (using alloc_ordered_workqueue()) at the driver level, and pass this
-	 * queue to drm_sched_init(), to guarantee that timeout handlers are
-	 * executed sequentially. The above workflow needs to be slightly
-	 * adjusted in that case:
-	 *
-	 * 1. Stop all schedulers impacted by the reset using drm_sched_stop()
-	 * 2. Try to gracefully stop non-faulty jobs on all queues impacted by
-	 *    the reset (optional)
-	 * 3. Issue a GPU reset on all faulty queues (driver-specific)
-	 * 4. Re-submit jobs on all schedulers impacted by the reset using
-	 *    drm_sched_resubmit_jobs()
-	 * 5. Restart all schedulers that were stopped in step #1 using
-	 *    drm_sched_start()
-	 *
-	 * Return DRM_GPU_SCHED_STAT_NOMINAL, when all is normal,
-	 * and the underlying driver has started or completed recovery.
-	 *
-	 * Return DRM_GPU_SCHED_STAT_ENODEV, if the device is no longer
-	 * available, i.e. has been unplugged.
+	 * timeout handlers of different schedulers. One way to achieve this
+	 * synchronization is to create an ordered workqueue (using
+	 * alloc_ordered_workqueue()) at the driver level, and pass this queue
+	 * as drm_sched_init()'s @timeout_wq parameter. This will guarantee
+	 * that timeout handlers are executed sequentially.
 	 */
 	enum drm_gpu_sched_stat (*timedout_job)(struct drm_sched_job *sched_job);
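As a rough illustration of the firmware-scheduler workflow described
in the new documentation, a timedout_job() implementation might be
shaped as in the sketch below. All my_driver_* helpers are made-up
names, and real drivers perform considerably more bookkeeping:

/*
 * Hypothetical sketch, not from this series: timeout handling for a
 * firmware scheduler with one entity per scheduler.
 */
static enum drm_gpu_sched_stat
my_driver_timedout_job(struct drm_sched_job *sched_job)
{
	struct my_driver_job *job = to_my_driver_job(sched_job);

	/* Step 1: park the scheduler so nothing new gets queued. */
	drm_sched_stop(sched_job->sched, sched_job);

	if (!my_driver_device_present(job->dev))
		return DRM_GPU_SCHED_STAT_ENODEV;

	/*
	 * Step 2: have the firmware tear down the faulted ring; other
	 * rings are expected to stay unaffected.
	 */
	my_driver_remove_ring(job->ring);

	/*
	 * Step 3: kill the entity the faulted job stems from, together
	 * with its scheduler (hypothetical teardown helper).
	 */
	my_driver_kill_entity_and_sched(job->entity);

	return DRM_GPU_SCHED_STAT_NOMINAL;
}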