From patchwork Mon Mar 31 20:16:52 2025
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 14034084
From: Tvrtko Ursulin
To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: kernel-dev@igalia.com, Tvrtko Ursulin, Christian König,
 Danilo Krummrich, Matthew Brost, Philipp Stanner,
 Pierre-Eric Pelloux-Prayer
Subject: [RFC v3 01/14] drm/sched: Add some scheduling quality unit tests
Date: Mon, 31 Mar 2025 21:16:52 +0100
Message-ID: <20250331201705.60663-2-tvrtko.ursulin@igalia.com>
In-Reply-To: <20250331201705.60663-1-tvrtko.ursulin@igalia.com>
References: <20250331201705.60663-1-tvrtko.ursulin@igalia.com>

Signed-off-by: Tvrtko Ursulin
Cc: Christian König
Cc: Danilo Krummrich
Cc: Matthew Brost
Cc: Philipp Stanner
Cc: Pierre-Eric Pelloux-Prayer
---
 drivers/gpu/drm/scheduler/tests/Makefile      |   3 +-
 .../gpu/drm/scheduler/tests/tests_scheduler.c | 548 ++++++++++++++++++
 2 files changed, 550 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/scheduler/tests/tests_scheduler.c

diff --git a/drivers/gpu/drm/scheduler/tests/Makefile b/drivers/gpu/drm/scheduler/tests/Makefile
index 5bf707bad373..9ec185fbbc15 100644
--- a/drivers/gpu/drm/scheduler/tests/Makefile
+++ b/drivers/gpu/drm/scheduler/tests/Makefile
@@ -2,6 +2,7 @@
 drm-sched-tests-y := \
 	mock_scheduler.o \
-	tests_basic.o
+	tests_basic.o \
+	tests_scheduler.o
 
 obj-$(CONFIG_DRM_SCHED_KUNIT_TEST) += drm-sched-tests.o

diff --git a/drivers/gpu/drm/scheduler/tests/tests_scheduler.c b/drivers/gpu/drm/scheduler/tests/tests_scheduler.c
new file mode 100644
index 000000000000..aa37c0dc8d66
--- /dev/null
+++ b/drivers/gpu/drm/scheduler/tests/tests_scheduler.c
@@ -0,0 +1,548 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2025 Valve Corporation */
+
+#include
+#include
+#include
+
+#include "sched_tests.h"
+
+/*
+ * The DRM scheduler scheduling tests exercise load balancing decisions,
+ * i.e. entity selection logic.
+ */
+
+static int drm_sched_scheduler_init(struct kunit *test)
+{
+	struct drm_mock_scheduler *sched;
+
+	sched = drm_mock_sched_new(test, MAX_SCHEDULE_TIMEOUT);
+	sched->base.credit_limit = 1;
+
+	test->priv = sched;
+
+	return 0;
+}
+
+static void drm_sched_scheduler_exit(struct kunit *test)
+{
+	struct drm_mock_scheduler *sched = test->priv;
+
+	drm_mock_sched_fini(sched);
+}
+
+static void drm_sched_scheduler_queue_overhead(struct kunit *test)
+{
+	struct drm_mock_scheduler *sched = test->priv;
+	struct drm_mock_sched_entity *entity;
+	const unsigned int job_us = 1000;
+	const unsigned int jobs = 1000;
+	const unsigned int total_us = jobs * job_us;
+	struct drm_mock_sched_job *job, *first;
+	ktime_t start, end;
+	bool done;
+	int i;
+
+	/*
+	 * Deep queue job at a time processing (single credit).
+	 *
+	 * This measures the overhead of picking and processing a job at a time
+	 * by comparing the ideal total "GPU" time of all submitted jobs versus
+	 * the time actually taken.
+	 */
+
+	KUNIT_ASSERT_EQ(test, sched->base.credit_limit, 1);
+
+	entity = drm_mock_sched_entity_new(test,
+					   DRM_SCHED_PRIORITY_NORMAL,
+					   sched);
+
+	for (i = 0; i <= jobs; i++) {
+		job = drm_mock_sched_job_new(test, entity);
+		if (i == 0)
+			first = job; /* Extra first job blocks the queue */
+		else
+			drm_mock_sched_job_set_duration_us(job, job_us);
+		drm_mock_sched_job_submit(job);
+	}
+
+	done = drm_mock_sched_job_wait_scheduled(first, HZ);
+	KUNIT_ASSERT_TRUE(test, done);
+
+	start = ktime_get();
+	i = drm_mock_sched_advance(sched, 1); /* Release the queue */
+	KUNIT_ASSERT_EQ(test, i, 1);
+
+	done = drm_mock_sched_job_wait_finished(job,
+						usecs_to_jiffies(total_us) * 5);
+	end = ktime_get();
+	KUNIT_ASSERT_TRUE(test, done);
+
+	pr_info("Expected %uus, actual %lldus\n",
+		total_us,
+		ktime_to_us(ktime_sub(end, start)));
+
+	drm_mock_sched_entity_free(entity);
+}
+
+static void drm_sched_scheduler_ping_pong(struct kunit *test)
+{
+	struct drm_mock_sched_job *job, *first, *prev = NULL;
+	struct drm_mock_scheduler *sched = test->priv;
+	struct drm_mock_sched_entity *entity[2];
+	const unsigned int job_us = 1000;
+	const unsigned int jobs = 1000;
+	const unsigned int total_us = jobs * job_us;
+	ktime_t start, end;
+	bool done;
+	int i;
+
+	/*
+	 * Two entities in an inter-dependency chain.
+	 *
+	 * This measures the overhead of picking and processing a job at a time,
+	 * where each job depends on the previous one from the different
+	 * entity, by comparing the ideal total "GPU" time of all submitted jobs
+	 * versus the time actually taken.
+	 */
+
+	KUNIT_ASSERT_EQ(test, sched->base.credit_limit, 1);
+
+	for (i = 0; i < ARRAY_SIZE(entity); i++)
+		entity[i] = drm_mock_sched_entity_new(test,
+						      DRM_SCHED_PRIORITY_NORMAL,
+						      sched);
+
+	for (i = 0; i <= jobs; i++) {
+		job = drm_mock_sched_job_new(test, entity[i & 1]);
+		if (i == 0)
+			first = job; /* Extra first job blocks the queue */
+		else
+			drm_mock_sched_job_set_duration_us(job, job_us);
+		if (prev)
+			drm_sched_job_add_dependency(&job->base,
+						     dma_fence_get(&prev->base.s_fence->finished));
+		drm_mock_sched_job_submit(job);
+		prev = job;
+	}
+
+	done = drm_mock_sched_job_wait_scheduled(first, HZ);
+	KUNIT_ASSERT_TRUE(test, done);
+
+	start = ktime_get();
+	i = drm_mock_sched_advance(sched, 1); /* Release the queue */
+	KUNIT_ASSERT_EQ(test, i, 1);
+
+	done = drm_mock_sched_job_wait_finished(job,
+						usecs_to_jiffies(total_us) * 5);
+	end = ktime_get();
+	KUNIT_ASSERT_TRUE(test, done);
+
+	pr_info("Expected %uus, actual %lldus\n",
+		total_us,
+		ktime_to_us(ktime_sub(end, start)));
+
+	for (i = 0; i < ARRAY_SIZE(entity); i++)
+		drm_mock_sched_entity_free(entity[i]);
+}
+
+static struct kunit_case drm_sched_scheduler_overhead_tests[] = {
+	KUNIT_CASE_SLOW(drm_sched_scheduler_queue_overhead),
+	KUNIT_CASE_SLOW(drm_sched_scheduler_ping_pong),
+	{}
+};
+
+static struct kunit_suite drm_sched_scheduler_overhead = {
+	.name = "drm_sched_scheduler_overhead_tests",
+	.init = drm_sched_scheduler_init,
+	.exit = drm_sched_scheduler_exit,
+	.test_cases = drm_sched_scheduler_overhead_tests,
+};
+
+struct drm_sched_client_params {
+	enum drm_sched_priority priority;
+	unsigned int job_cnt;
+	unsigned int job_us;
+	unsigned int wait_us;
+	bool sync;
+};
+
+struct drm_sched_test_params {
+	const char *description;
+	struct drm_sched_client_params client[2];
+};
+
+static const struct drm_sched_test_params drm_sched_cases[] = {
+	{
+		.description = "Normal and normal",
+		.client[0] = {
+			.priority = DRM_SCHED_PRIORITY_NORMAL,
+			.job_cnt = 1,
+			.job_us = 8000,
+			.wait_us = 0,
+			.sync = false,
+		},
+		.client[1] = {
+			.priority = DRM_SCHED_PRIORITY_NORMAL,
+			.job_cnt = 1,
+			.job_us = 8000,
+			.wait_us = 0,
+			.sync = false,
+		},
+	},
+	{
+		.description = "Normal and low",
+		.client[0] = {
+			.priority = DRM_SCHED_PRIORITY_NORMAL,
+			.job_cnt = 1,
+			.job_us = 8000,
+			.wait_us = 0,
+			.sync = false,
+		},
+		.client[1] = {
+			.priority = DRM_SCHED_PRIORITY_LOW,
+			.job_cnt = 1,
+			.job_us = 8000,
+			.wait_us = 0,
+			.sync = false,
+		},
+	},
+	{
+		.description = "High and normal",
+		.client[0] = {
+			.priority = DRM_SCHED_PRIORITY_HIGH,
+			.job_cnt = 1,
+			.job_us = 8000,
+			.wait_us = 0,
+			.sync = false,
+		},
+		.client[1] = {
+			.priority = DRM_SCHED_PRIORITY_NORMAL,
+			.job_cnt = 1,
+			.job_us = 8000,
+			.wait_us = 0,
+			.sync = false,
+		},
+	},
+	{
+		.description = "High and low",
+		.client[0] = {
+			.priority = DRM_SCHED_PRIORITY_HIGH,
+			.job_cnt = 1,
+			.job_us = 8000,
+			.wait_us = 0,
+			.sync = false,
+		},
+		.client[1] = {
+			.priority = DRM_SCHED_PRIORITY_LOW,
+			.job_cnt = 1,
+			.job_us = 8000,
+			.wait_us = 0,
+			.sync = false,
+		},
+	},
+	{
+		.description = "50 and 50",
+		.client[0] = {
+			.priority = DRM_SCHED_PRIORITY_NORMAL,
+			.job_cnt = 1,
+			.job_us = 1500,
+			.wait_us = 1500,
+			.sync = true,
+		},
+		.client[1] = {
+			.priority = DRM_SCHED_PRIORITY_NORMAL,
+			.job_cnt = 1,
+			.job_us = 2500,
+			.wait_us = 2500,
+			.sync = true,
+		},
+	},
+	{
+		.description = "50 and 50 low",
+		.client[0] = {
+			.priority = DRM_SCHED_PRIORITY_NORMAL,
+			.job_cnt = 1,
+			.job_us = 1500,
+			.wait_us = 1500,
+			.sync = true,
+		},
+		.client[1] = {
+			.priority = DRM_SCHED_PRIORITY_LOW,
+			.job_cnt = 1,
+			.job_us = 2500,
+			.wait_us = 2500,
+			.sync = true,
+		},
+	},
+	{
+		.description = "50 high and 50",
+		.client[0] = {
+			.priority = DRM_SCHED_PRIORITY_HIGH,
+			.job_cnt = 1,
+			.job_us = 1500,
+			.wait_us = 1500,
+			.sync = true,
+		},
+		.client[1] = {
+			.priority = DRM_SCHED_PRIORITY_NORMAL,
+			.job_cnt = 1,
+			.job_us = 2500,
+			.wait_us = 2500,
+			.sync = true,
+		},
+	},
+	{
+		.description = "Low hog and interactive",
+		.client[0] = {
+			.priority = DRM_SCHED_PRIORITY_LOW,
+			.job_cnt = 3,
+			.job_us = 2500,
+			.wait_us = 500,
+			.sync = false,
+		},
+		.client[1] = {
+			.priority = DRM_SCHED_PRIORITY_NORMAL,
+			.job_cnt = 1,
+			.job_us = 500,
+			.wait_us = 10000,
+			.sync = true,
+		},
+	},
+	{
+		.description = "Heavy and interactive",
+		.client[0] = {
+			.priority = DRM_SCHED_PRIORITY_NORMAL,
+			.job_cnt = 3,
+			.job_us = 2500,
+			.wait_us = 2500,
+			.sync = true,
+		},
+		.client[1] = {
+			.priority = DRM_SCHED_PRIORITY_NORMAL,
+			.job_cnt = 1,
+			.job_us = 1000,
+			.wait_us = 9000,
+			.sync = true,
+		},
+	},
+};
+
+static void
+drm_sched_desc(const struct drm_sched_test_params *params, char *desc)
+{
+	strscpy(desc, params->description, KUNIT_PARAM_DESC_SIZE);
+}
+
+KUNIT_ARRAY_PARAM(drm_sched_scheduler_two_clients,
+		  drm_sched_cases,
+		  drm_sched_desc);
+
+struct test_client {
+	struct kunit *test;
+
+	struct drm_mock_sched_entity *entity;
+
+	struct kthread_worker *worker;
+	struct kthread_work work;
+
+	unsigned int id;
+	ktime_t duration;
+
+	struct drm_sched_client_params params;
+
+	ktime_t ideal_duration;
+	unsigned int cycles;
+	unsigned int cycle;
+	ktime_t start;
+	ktime_t end;
+	bool done;
+};
+
+static void drm_sched_client_work(struct kthread_work *work)
+{
+	struct test_client *client = container_of(work, typeof(*client), work);
+	unsigned int cycle, work_us, period_us, sync_wait, exit_wait;
+	struct drm_mock_sched_job *job = NULL;
+
+	work_us = client->params.job_cnt * client->params.job_us;
+	period_us = work_us + client->params.wait_us;
+	client->cycles = DIV_ROUND_UP(ktime_to_us(client->duration), period_us);
+	client->ideal_duration = us_to_ktime(client->cycles * period_us);
+	sync_wait = msecs_to_jiffies(work_us / 1000) * 3;
+	exit_wait = msecs_to_jiffies(ktime_to_ms(client->duration)) * 5;
+
+	client->start = ktime_get();
+
+	for (cycle = 0; cycle < client->cycles; cycle++) {
+		unsigned int batch;
+
+		if (READ_ONCE(client->done))
+			break;
+
+		for (batch = 0; batch < client->params.job_cnt; batch++) {
+			job = drm_mock_sched_job_new(client->test,
+						     client->entity);
+			drm_mock_sched_job_set_duration_us(job,
+							   client->params.job_us);
+			drm_mock_sched_job_submit(job);
+		}
+
+		if (client->params.sync)
+			drm_mock_sched_job_wait_finished(job, sync_wait);
+
+		WRITE_ONCE(client->cycle, cycle);
+
+		if (READ_ONCE(client->done))
+			break;
+
+		if (client->params.wait_us)
+			fsleep(client->params.wait_us);
+		else
+			cond_resched();
+	}
+
+	client->done = drm_mock_sched_job_wait_finished(job, exit_wait);
+	client->end = ktime_get();
+}
+
+static const char *prio_str(enum drm_sched_priority prio)
+{
+	switch (prio) {
+	case DRM_SCHED_PRIORITY_KERNEL:
+		return "kernel";
+	case DRM_SCHED_PRIORITY_LOW:
+		return "low";
+	case DRM_SCHED_PRIORITY_NORMAL:
+		return "normal";
+	case DRM_SCHED_PRIORITY_HIGH:
+		return "high";
+	default:
+		return "???";
+	}
+}
+
+static void drm_sched_scheduler_two_clients_test(struct kunit *test)
+{
+	const struct drm_sched_test_params *params = test->param_value;
+	struct drm_mock_scheduler *sched = test->priv;
+	struct test_client client[2] = { };
+	unsigned int prev_cycle[2] = { };
+	unsigned int i, j;
+	ktime_t start;
+
+	/*
+	 * Same job stream from two clients.
+	 */
+
+	for (i = 0; i < ARRAY_SIZE(client); i++)
+		client[i].entity =
+			drm_mock_sched_entity_new(test,
+						  params->client[i].priority,
+						  sched);
+
+	for (i = 0; i < ARRAY_SIZE(client); i++) {
+		client[i].test = test;
+		client[i].id = i;
+		client[i].duration = ms_to_ktime(1000);
+		client[i].params = params->client[i];
+		client[i].worker =
+			kthread_create_worker(0, "%s-%u", __func__, i);
+		if (IS_ERR(client[i].worker)) {
+			for (j = 0; j < i; j++)
+				kthread_destroy_worker(client[j].worker);
+			KUNIT_FAIL(test, "Failed to create worker!\n");
+		}
+
+		kthread_init_work(&client[i].work, drm_sched_client_work);
+	}
+
+	for (i = 0; i < ARRAY_SIZE(client); i++)
+		kthread_queue_work(client[i].worker, &client[i].work);
+
+	/*
+	 * The clients (workers) can be a mix of async (deep submission queue),
+	 * sync (one job at a time), or something in between. Therefore it is
+	 * difficult to display a single metric representing their progress.
+	 *
+	 * Each struct drm_sched_client_params describes the actual submission
+	 * pattern which happens in the following steps:
+	 *
+	 * 1. Submit N jobs
+	 * 2. Wait for last submitted job to finish
+	 * 3. Sleep for U micro-seconds
+	 * 4. Goto 1. for C cycles
+	 *
+	 * Where the number of cycles is calculated to match the target client
+	 * duration from the respective struct drm_sched_test_params.
+	 *
+	 * To assess scheduling behaviour what we output for both clients is:
+	 * - pct: Percentage progress of the jobs submitted
+	 * - cps: "Cycles" per second (where one cycle is one 1.-4. above)
+	 * - qd: Number of outstanding jobs in the client/entity
+	 */
+
+	start = ktime_get();
+	pr_info("%s:\n\t pct1 cps1 qd1; pct2 cps2 qd2\n",
+		params->description);
+	while (!READ_ONCE(client[0].done) || !READ_ONCE(client[1].done)) {
+		unsigned int pct[2], qd[2], cycle[2], cps[2];
+
+		for (i = 0; i < ARRAY_SIZE(client); i++) {
+			qd[i] = spsc_queue_count(&client[i].entity->base.job_queue);
+			cycle[i] = READ_ONCE(client[i].cycle);
+			cps[i] = DIV_ROUND_UP(1000 * (cycle[i] - prev_cycle[i]),
+					      100);
+			if (client[i].cycles)
+				pct[i] = DIV_ROUND_UP(100 * cycle[i],
+						      client[i].cycles);
+			else
+				pct[i] = 0;
+			prev_cycle[i] = cycle[i];
+		}
+		pr_info("\t+%6lldms: %3u %5u %4u; %3u %5u %4u\n",
+			ktime_to_ms(ktime_sub(ktime_get(), start)),
+			pct[0], cps[0], qd[0],
+			pct[1], cps[1], qd[1]);
+		msleep(100);
+	}
+
+	for (i = 0; i < ARRAY_SIZE(client); i++) {
+		kthread_flush_work(&client[i].work);
+		kthread_destroy_worker(client[i].worker);
+	}
+
+	for (i = 0; i < ARRAY_SIZE(client); i++)
+		KUNIT_ASSERT_TRUE(test, client[i].done);
+
+	for (i = 0; i < ARRAY_SIZE(client); i++) {
+		pr_info(" %u: prio=%s sync=%u elapsed_ms=%lldms (ideal_ms=%lldms)",
+			i,
+			prio_str(params->client[i].priority),
+			params->client[i].sync,
+			ktime_to_ms(ktime_sub(client[i].end, client[i].start)),
+			ktime_to_ms(client[i].ideal_duration));
+		drm_mock_sched_entity_free(client[i].entity);
+	}
+}
+
+static const struct kunit_attributes drm_sched_scheduler_two_clients_attr = {
+	.speed = KUNIT_SPEED_SLOW,
+};
+
+static struct kunit_case drm_sched_scheduler_two_clients_tests[] = {
+	KUNIT_CASE_PARAM_ATTR(drm_sched_scheduler_two_clients_test,
+			      drm_sched_scheduler_two_clients_gen_params,
+			      drm_sched_scheduler_two_clients_attr),
+	{}
+};
+
+static struct kunit_suite drm_sched_scheduler_two_clients = {
+	.name = "drm_sched_scheduler_two_clients_tests",
+	.init = drm_sched_scheduler_init,
+	.exit = drm_sched_scheduler_exit,
+	.test_cases = drm_sched_scheduler_two_clients_tests,
+};
+
+kunit_test_suites(&drm_sched_scheduler_overhead,
+		  &drm_sched_scheduler_two_clients);

From patchwork Mon Mar 31 20:16:53 2025
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 14034083
From: Tvrtko Ursulin
To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: kernel-dev@igalia.com, Tvrtko Ursulin, Christian König,
 Danilo Krummrich, Matthew Brost, Philipp Stanner
Subject: [RFC v3 02/14] drm/sched: Avoid double re-lock on the job free path
Date: Mon, 31 Mar 2025 21:16:53 +0100
Message-ID: <20250331201705.60663-3-tvrtko.ursulin@igalia.com>
In-Reply-To: <20250331201705.60663-1-tvrtko.ursulin@igalia.com>
References: <20250331201705.60663-1-tvrtko.ursulin@igalia.com>

Currently the job free work item locks sched->job_list_lock a first time
to see if there are any jobs, frees a single job, and then locks it again
to decide whether to re-queue itself if there are more finished jobs.

Since drm_sched_get_finished_job() already looks at the second job in the
queue, we can simply extend the timestamp check with the full signaled
check and have it report back to the caller whether there are more jobs
to free. That way the work item does not have to lock the list again and
repeat a very similar check.

v2:
 * Consolidate to a single dma_fence_is_signaled check.
Signed-off-by: Tvrtko Ursulin
Cc: Christian König
Cc: Danilo Krummrich
Cc: Matthew Brost
Cc: Philipp Stanner
---
 drivers/gpu/drm/scheduler/sched_main.c | 40 ++++++++++----------------
 1 file changed, 15 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index ca5028f7a4e9..4a4c07d0163c 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -371,22 +371,6 @@ static void __drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
 	queue_work(sched->submit_wq, &sched->work_free_job);
 }
 
-/**
- * drm_sched_run_free_queue - enqueue free-job work if ready
- * @sched: scheduler instance
- */
-static void drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
-{
-	struct drm_sched_job *job;
-
-	spin_lock(&sched->job_list_lock);
-	job = list_first_entry_or_null(&sched->pending_list,
-				       struct drm_sched_job, list);
-	if (job && dma_fence_is_signaled(&job->s_fence->finished))
-		__drm_sched_run_free_queue(sched);
-	spin_unlock(&sched->job_list_lock);
-}
-
 /**
  * drm_sched_job_done - complete a job
  * @s_job: pointer to the job which is done
@@ -1103,12 +1087,13 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
  * drm_sched_get_finished_job - fetch the next finished job to be destroyed
  *
  * @sched: scheduler instance
+ * @have_more: are there more finished jobs on the list
  *
  * Returns the next finished job from the pending list (if there is one)
  * ready for it to be destroyed.
  */
 static struct drm_sched_job *
-drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
+drm_sched_get_finished_job(struct drm_gpu_scheduler *sched, bool *have_more)
 {
 	struct drm_sched_job *job, *next;
 
@@ -1116,22 +1101,24 @@ drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
 	job = list_first_entry_or_null(&sched->pending_list,
 				       struct drm_sched_job, list);
-
 	if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
 		/* remove job from pending_list */
 		list_del_init(&job->list);
 
 		/* cancel this job's TO timer */
 		cancel_delayed_work(&sched->work_tdr);
-		/* make the scheduled timestamp more accurate */
+
+		*have_more = false;
 		next = list_first_entry_or_null(&sched->pending_list,
 						typeof(*next), list);
-
 		if (next) {
-			if (test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT,
-				     &next->s_fence->scheduled.flags))
+			if (dma_fence_is_signaled(&next->s_fence->finished)) {
+				*have_more = true;
+				/* make the scheduled timestamp more accurate */
 				next->s_fence->scheduled.timestamp =
 					dma_fence_timestamp(&job->s_fence->finished);
+			}
+
 			/* start TO timer for next job */
 			drm_sched_start_timeout(sched);
 		}
@@ -1190,12 +1177,15 @@ static void drm_sched_free_job_work(struct work_struct *w)
 	struct drm_gpu_scheduler *sched =
 		container_of(w, struct drm_gpu_scheduler, work_free_job);
 	struct drm_sched_job *job;
+	bool have_more;
 
-	job = drm_sched_get_finished_job(sched);
-	if (job)
+	job = drm_sched_get_finished_job(sched, &have_more);
+	if (job) {
 		sched->ops->free_job(job);
+		if (have_more)
+			__drm_sched_run_free_queue(sched);
+	}
 
-	drm_sched_run_free_queue(sched);
 	drm_sched_run_job_queue(sched);
 }

From patchwork Mon Mar 31 20:16:54 2025
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 14034086
From: Tvrtko Ursulin
To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: kernel-dev@igalia.com, Tvrtko Ursulin, Christian König,
 Danilo Krummrich, Matthew Brost, Philipp Stanner
Subject: [RFC v3 03/14] drm/sched: Consolidate drm_sched_job_timedout
Date: Mon, 31 Mar 2025 21:16:54 +0100
Message-ID: <20250331201705.60663-4-tvrtko.ursulin@igalia.com>
In-Reply-To: <20250331201705.60663-1-tvrtko.ursulin@igalia.com>
References: <20250331201705.60663-1-tvrtko.ursulin@igalia.com>

Reduce to one spin_unlock for a hopefully slightly clearer flow in the
function.

It may appear there is a behavioural change: previously
drm_sched_start_timeout_unlocked() could still run if the pending list
was initially empty and jobs appeared only after the unlock. However,
if the code relied on the TDR handler restarting itself, it would
already fail to do that whenever a job arrived on the pending list
after the check.

Also fix one stale comment while touching the function.
Signed-off-by: Tvrtko Ursulin
Cc: Christian König
Cc: Danilo Krummrich
Cc: Matthew Brost
Cc: Philipp Stanner
---
 drivers/gpu/drm/scheduler/sched_main.c | 37 +++++++++++++-------------
 1 file changed, 18 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 4a4c07d0163c..f593b88ab02c 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -522,38 +522,37 @@ static void drm_sched_job_begin(struct drm_sched_job *s_job)
 
 static void drm_sched_job_timedout(struct work_struct *work)
 {
-	struct drm_gpu_scheduler *sched;
+	struct drm_gpu_scheduler *sched =
+		container_of(work, struct drm_gpu_scheduler, work_tdr.work);
+	enum drm_gpu_sched_stat status;
 	struct drm_sched_job *job;
-	enum drm_gpu_sched_stat status = DRM_GPU_SCHED_STAT_NOMINAL;
-
-	sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
 
 	/* Protects against concurrent deletion in drm_sched_get_finished_job */
 	spin_lock(&sched->job_list_lock);
 	job = list_first_entry_or_null(&sched->pending_list,
 				       struct drm_sched_job, list);
-
 	if (job) {
 		/*
 		 * Remove the bad job so it cannot be freed by concurrent
-		 * drm_sched_cleanup_jobs. It will be reinserted back after sched->thread
-		 * is parked at which point it's safe.
+		 * drm_sched_get_finished_job. It will be reinserted back after
+		 * scheduler worker is stopped at which point it's safe.
 		 */
 		list_del_init(&job->list);
-		spin_unlock(&sched->job_list_lock);
+	}
+	spin_unlock(&sched->job_list_lock);
 
-		status = job->sched->ops->timedout_job(job);
+	if (!job)
+		return;
 
-		/*
-		 * Guilty job did complete and hence needs to be manually removed
-		 * See drm_sched_stop doc.
-		 */
-		if (sched->free_guilty) {
-			job->sched->ops->free_job(job);
-			sched->free_guilty = false;
-		}
-	} else {
-		spin_unlock(&sched->job_list_lock);
+	status = job->sched->ops->timedout_job(job);
+
+	/*
+	 * Guilty job did complete and hence needs to be manually removed. See
+	 * documentation for drm_sched_stop.
+	 */
+	if (sched->free_guilty) {
+		job->sched->ops->free_job(job);
+		sched->free_guilty = false;
 	}
 
 	if (status != DRM_GPU_SCHED_STAT_ENODEV)

From patchwork Mon Mar 31 20:16:55 2025
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 14034087
From: Tvrtko Ursulin
To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: kernel-dev@igalia.com, Tvrtko Ursulin, Christian König,
 Danilo Krummrich, Matthew Brost, Philipp Stanner
Subject: [RFC v3 04/14] drm/sched: Clarify locked section in drm_sched_rq_select_entity_fifo
Date: Mon, 31 Mar 2025 21:16:55 +0100
Message-ID: <20250331201705.60663-5-tvrtko.ursulin@igalia.com>
In-Reply-To: <20250331201705.60663-1-tvrtko.ursulin@igalia.com>
References: <20250331201705.60663-1-tvrtko.ursulin@igalia.com>

Rq->lock only protects the tree walk, so let's move the rest out of the
locked section.
Signed-off-by: Tvrtko Ursulin
Cc: Christian König
Cc: Danilo Krummrich
Cc: Matthew Brost
Cc: Philipp Stanner
---
 drivers/gpu/drm/scheduler/sched_main.c | 31 ++++++++++++++------------
 1 file changed, 17 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index f593b88ab02c..357133e6d4d0 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -326,29 +326,32 @@ static struct drm_sched_entity *
 drm_sched_rq_select_entity_fifo(struct drm_gpu_scheduler *sched,
                                 struct drm_sched_rq *rq)
 {
+        struct drm_sched_entity *entity = NULL;
         struct rb_node *rb;
 
         spin_lock(&rq->lock);
         for (rb = rb_first_cached(&rq->rb_tree_root); rb; rb = rb_next(rb)) {
-                struct drm_sched_entity *entity;
-
                 entity = rb_entry(rb, struct drm_sched_entity, rb_tree_node);
-                if (drm_sched_entity_is_ready(entity)) {
-                        /* If we can't queue yet, preserve the current entity in
-                         * terms of fairness.
-                         */
-                        if (!drm_sched_can_queue(sched, entity)) {
-                                spin_unlock(&rq->lock);
-                                return ERR_PTR(-ENOSPC);
-                        }
-
-                        reinit_completion(&entity->entity_idle);
+                if (drm_sched_entity_is_ready(entity))
                         break;
-                }
+                else
+                        entity = NULL;
         }
         spin_unlock(&rq->lock);
 
-        return rb ? rb_entry(rb, struct drm_sched_entity, rb_tree_node) : NULL;
+        if (!entity)
+                return NULL;
+
+        /*
+         * If scheduler cannot take more jobs signal the caller to not consider
+         * lower priority queues.
+         */
+        if (!drm_sched_can_queue(sched, entity))
+                return ERR_PTR(-ENOSPC);
+
+        reinit_completion(&entity->entity_idle);
+
+        return entity;
 }
 
 /**

From patchwork Mon Mar 31 20:16:56 2025
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 14034094
From: Tvrtko Ursulin
To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: kernel-dev@igalia.com, Christian König, Danilo Krummrich,
    Matthew Brost, Philipp Stanner
Subject: [RFC v3 05/14] drm/sched: Consolidate drm_sched_rq_select_entity_rr
Date: Mon, 31 Mar 2025 21:16:56 +0100
Message-ID: <20250331201705.60663-6-tvrtko.ursulin@igalia.com>
In-Reply-To: <20250331201705.60663-1-tvrtko.ursulin@igalia.com>

Extract the two identical copies of the code into a single function
epilogue to make the function smaller and more readable.
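The refactoring pattern itself — two search loops funnelling into one shared epilogue through a label — can be shown in miniature (hypothetical toy_* names, not the scheduler code):

```c
#include <stddef.h>

/* Toy illustration of the refactor: two loops share one success path
 * via a 'found' label instead of each duplicating the epilogue. */
struct toy_item {
	int ready;
	int picked;
};

static struct toy_item *toy_pick(struct toy_item *items, size_t n,
				 size_t resume)
{
	size_t i;

	for (i = resume; i < n; i++)	/* continue after the last pick */
		if (items[i].ready)
			goto found;

	for (i = 0; i < resume; i++)	/* wrap around to the start */
		if (items[i].ready)
			goto found;

	return NULL;

found:
	items[i].picked++;	/* the one shared epilogue */
	return &items[i];
}
```

Any later change to the success path (as the next patch in the series makes) now only has to be made in one place.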
Signed-off-by: Tvrtko Ursulin
Cc: Christian König
Cc: Danilo Krummrich
Cc: Matthew Brost
Cc: Philipp Stanner
---
 drivers/gpu/drm/scheduler/sched_main.c | 48 +++++++++++---------------
 1 file changed, 20 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 357133e6d4d0..600904271b01 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -268,38 +268,14 @@ drm_sched_rq_select_entity_rr(struct drm_gpu_scheduler *sched,
         entity = rq->current_entity;
         if (entity) {
                 list_for_each_entry_continue(entity, &rq->entities, list) {
-                        if (drm_sched_entity_is_ready(entity)) {
-                                /* If we can't queue yet, preserve the current
-                                 * entity in terms of fairness.
-                                 */
-                                if (!drm_sched_can_queue(sched, entity)) {
-                                        spin_unlock(&rq->lock);
-                                        return ERR_PTR(-ENOSPC);
-                                }
-
-                                rq->current_entity = entity;
-                                reinit_completion(&entity->entity_idle);
-                                spin_unlock(&rq->lock);
-                                return entity;
-                        }
+                        if (drm_sched_entity_is_ready(entity))
+                                goto found;
                 }
         }
 
         list_for_each_entry(entity, &rq->entities, list) {
-                if (drm_sched_entity_is_ready(entity)) {
-                        /* If we can't queue yet, preserve the current entity in
-                         * terms of fairness.
-                         */
-                        if (!drm_sched_can_queue(sched, entity)) {
-                                spin_unlock(&rq->lock);
-                                return ERR_PTR(-ENOSPC);
-                        }
-
-                        rq->current_entity = entity;
-                        reinit_completion(&entity->entity_idle);
-                        spin_unlock(&rq->lock);
-                        return entity;
-                }
+                if (drm_sched_entity_is_ready(entity))
+                        goto found;
 
                 if (entity == rq->current_entity)
                         break;
@@ -308,6 +284,22 @@ drm_sched_rq_select_entity_rr(struct drm_gpu_scheduler *sched,
         spin_unlock(&rq->lock);
 
         return NULL;
+
+found:
+        if (!drm_sched_can_queue(sched, entity)) {
+                /*
+                 * If scheduler cannot take more jobs signal the caller to not
+                 * consider lower priority queues.
+                 */
+                entity = ERR_PTR(-ENOSPC);
+        } else {
+                rq->current_entity = entity;
+                reinit_completion(&entity->entity_idle);
+        }
+
+        spin_unlock(&rq->lock);
+
+        return entity;
 }
 
 /**

From patchwork Mon Mar 31 20:16:57 2025
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 14034088
From: Tvrtko Ursulin
To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: kernel-dev@igalia.com, Christian König, Danilo Krummrich,
    Matthew Brost, Philipp Stanner
Subject: [RFC v3 06/14] drm/sched: Implement RR via FIFO
Date: Mon, 31 Mar 2025 21:16:57 +0100
Message-ID: <20250331201705.60663-7-tvrtko.ursulin@igalia.com>
In-Reply-To: <20250331201705.60663-1-tvrtko.ursulin@igalia.com>

Round-robin is the non-default policy and it is unclear how widely it is
used. We can observe that it can be implemented on top of the FIFO data
structures if we simply invent a fake submit timestamp which increases
monotonically within each drm_sched_rq instance. So instead of
remembering which entity the scheduler worker picked last, we can bump
the picked one to the bottom of the tree, achieving the same round-robin
behaviour.

The advantage is that we can consolidate to a single code path and
remove a good deal of code. The downside is that round-robin mode now
needs to take a lock on the job pop path, but that should not be
noticeable.
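The core trick can be demonstrated with a toy model (all toy_* identifiers are hypothetical, the real code uses an rb tree and ktime_t): a min-key "FIFO" pick plus a monotonically increasing re-key of the winner yields round-robin ordering.

```c
#include <stddef.h>

/* Toy model: a "FIFO" queue always hands out the entity with the
 * smallest deadline. Re-keying the picked entity with a monotonically
 * increasing counter pushes it behind everyone else -- round-robin. */
struct toy_ent {
	long long deadline;
	int id;
};

static long long toy_rr_counter;

static long long toy_next_rr_deadline(void)
{
	return ++toy_rr_counter;	/* ktime_add_ns(rr_deadline, 1) analogue */
}

static struct toy_ent *toy_pick_min(struct toy_ent *e, size_t n)
{
	struct toy_ent *min = NULL;
	size_t i;

	for (i = 0; i < n; i++)		/* rb-tree walk analogue */
		if (!min || e[i].deadline < min->deadline)
			min = &e[i];

	if (min)
		min->deadline = toy_next_rr_deadline();	/* bump to the back */

	return min;
}
```

Starting the counter at (or above) the largest existing key guarantees a bumped entity lands behind every waiting one, so successive picks cycle through all entities in order.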
Signed-off-by: Tvrtko Ursulin
Cc: Christian König
Cc: Danilo Krummrich
Cc: Matthew Brost
Cc: Philipp Stanner
---
 drivers/gpu/drm/scheduler/sched_entity.c | 45 ++++++++------
 drivers/gpu/drm/scheduler/sched_main.c   | 76 ++----------------------
 include/drm/gpu_scheduler.h              |  5 +-
 3 files changed, 36 insertions(+), 90 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index 9b0122e99b44..bbb7f3d3e3e8 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -469,9 +469,19 @@ drm_sched_job_dependency(struct drm_sched_job *job,
         return NULL;
 }
 
+static ktime_t
+drm_sched_rq_get_rr_deadline(struct drm_sched_rq *rq)
+{
+        lockdep_assert_held(&rq->lock);
+
+        rq->rr_deadline = ktime_add_ns(rq->rr_deadline, 1);
+
+        return rq->rr_deadline;
+}
+
 struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity)
 {
-        struct drm_sched_job *sched_job;
+        struct drm_sched_job *sched_job, *next_job;
 
         sched_job = drm_sched_entity_queue_peek(entity);
         if (!sched_job)
@@ -506,21 +516,22 @@ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity)
          * Update the entity's location in the min heap according to
          * the timestamp of the next job, if any.
          */
-        if (drm_sched_policy == DRM_SCHED_POLICY_FIFO) {
-                struct drm_sched_job *next;
+        next_job = drm_sched_entity_queue_peek(entity);
+        if (next_job) {
+                struct drm_sched_rq *rq;
+                ktime_t ts;
 
-                next = drm_sched_entity_queue_peek(entity);
-                if (next) {
-                        struct drm_sched_rq *rq;
+                if (drm_sched_policy == DRM_SCHED_POLICY_FIFO)
+                        ts = next_job->submit_ts;
+                else
+                        ts = drm_sched_rq_get_rr_deadline(rq);
 
-                        spin_lock(&entity->lock);
-                        rq = entity->rq;
-                        spin_lock(&rq->lock);
-                        drm_sched_rq_update_fifo_locked(entity, rq,
-                                                        next->submit_ts);
-                        spin_unlock(&rq->lock);
-                        spin_unlock(&entity->lock);
-                }
+                spin_lock(&entity->lock);
+                rq = entity->rq;
+                spin_lock(&rq->lock);
+                drm_sched_rq_update_fifo_locked(entity, rq, ts);
+                spin_unlock(&rq->lock);
+                spin_unlock(&entity->lock);
         }
 
         /* Jobs and entities might have different lifecycles. Since we're
@@ -619,9 +630,9 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
 
                 spin_lock(&rq->lock);
                 drm_sched_rq_add_entity(rq, entity);
-
-                if (drm_sched_policy == DRM_SCHED_POLICY_FIFO)
-                        drm_sched_rq_update_fifo_locked(entity, rq, submit_ts);
+                if (drm_sched_policy == DRM_SCHED_POLICY_RR)
+                        submit_ts = drm_sched_rq_get_rr_deadline(rq);
+                drm_sched_rq_update_fifo_locked(entity, rq, submit_ts);
 
                 spin_unlock(&rq->lock);
                 spin_unlock(&entity->lock);
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 600904271b01..e931a9b91083 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -190,7 +190,6 @@ static void drm_sched_rq_init(struct drm_gpu_scheduler *sched,
         spin_lock_init(&rq->lock);
         INIT_LIST_HEAD(&rq->entities);
         rq->rb_tree_root = RB_ROOT_CACHED;
-        rq->current_entity = NULL;
         rq->sched = sched;
 }
 
@@ -236,74 +235,13 @@ void drm_sched_rq_remove_entity(struct drm_sched_rq *rq,
         atomic_dec(rq->sched->score);
         list_del_init(&entity->list);
 
-        if (rq->current_entity == entity)
-                rq->current_entity = NULL;
-
-        if (drm_sched_policy == DRM_SCHED_POLICY_FIFO)
-                drm_sched_rq_remove_fifo_locked(entity, rq);
+        drm_sched_rq_remove_fifo_locked(entity, rq);
 
         spin_unlock(&rq->lock);
 }
 
 /**
- * drm_sched_rq_select_entity_rr - Select an entity which could provide a job to run
- *
- * @sched: the gpu scheduler
- * @rq: scheduler run queue to check.
- *
- * Try to find the next ready entity.
- *
- * Return an entity if one is found; return an error-pointer (!NULL) if an
- * entity was ready, but the scheduler had insufficient credits to accommodate
- * its job; return NULL, if no ready entity was found.
- */
-static struct drm_sched_entity *
-drm_sched_rq_select_entity_rr(struct drm_gpu_scheduler *sched,
-                              struct drm_sched_rq *rq)
-{
-        struct drm_sched_entity *entity;
-
-        spin_lock(&rq->lock);
-
-        entity = rq->current_entity;
-        if (entity) {
-                list_for_each_entry_continue(entity, &rq->entities, list) {
-                        if (drm_sched_entity_is_ready(entity))
-                                goto found;
-                }
-        }
-
-        list_for_each_entry(entity, &rq->entities, list) {
-                if (drm_sched_entity_is_ready(entity))
-                        goto found;
-
-                if (entity == rq->current_entity)
-                        break;
-        }
-
-        spin_unlock(&rq->lock);
-
-        return NULL;
-
-found:
-        if (!drm_sched_can_queue(sched, entity)) {
-                /*
-                 * If scheduler cannot take more jobs signal the caller to not
-                 * consider lower priority queues.
-                 */
-                entity = ERR_PTR(-ENOSPC);
-        } else {
-                rq->current_entity = entity;
-                reinit_completion(&entity->entity_idle);
-        }
-
-        spin_unlock(&rq->lock);
-
-        return entity;
-}
-
-/**
- * drm_sched_rq_select_entity_fifo - Select an entity which provides a job to run
+ * drm_sched_rq_select_entity - Select an entity which provides a job to run
  *
  * @sched: the gpu scheduler
  * @rq: scheduler run queue to check.
@@ -315,8 +253,8 @@ drm_sched_rq_select_entity_rr(struct drm_gpu_scheduler *sched,
  * its job; return NULL, if no ready entity was found.
  */
 static struct drm_sched_entity *
-drm_sched_rq_select_entity_fifo(struct drm_gpu_scheduler *sched,
-                                struct drm_sched_rq *rq)
+drm_sched_rq_select_entity(struct drm_gpu_scheduler *sched,
+                           struct drm_sched_rq *rq)
 {
         struct drm_sched_entity *entity = NULL;
         struct rb_node *rb;
@@ -1061,15 +999,13 @@ void drm_sched_wakeup(struct drm_gpu_scheduler *sched)
 static struct drm_sched_entity *
 drm_sched_select_entity(struct drm_gpu_scheduler *sched)
 {
-        struct drm_sched_entity *entity;
+        struct drm_sched_entity *entity = NULL;
         int i;
 
         /* Start with the highest priority. */
         for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) {
-                entity = drm_sched_policy == DRM_SCHED_POLICY_FIFO ?
-                        drm_sched_rq_select_entity_fifo(sched, sched->sched_rq[i]) :
-                        drm_sched_rq_select_entity_rr(sched, sched->sched_rq[i]);
+                entity = drm_sched_rq_select_entity(sched, sched->sched_rq[i]);
 
                 if (entity)
                         break;
         }
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 1a7e377d4cbb..1073cc569cce 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -239,8 +239,7 @@ struct drm_sched_entity {
  * struct drm_sched_rq - queue of entities to be scheduled.
  *
  * @sched: the scheduler to which this rq belongs to.
- * @lock: protects @entities, @rb_tree_root and @current_entity.
- * @current_entity: the entity which is to be scheduled.
+ * @lock: protects @entities, @rb_tree_root and @rr_deadline.
  * @entities: list of the entities to be scheduled.
  * @rb_tree_root: root of time based priority queue of entities for FIFO scheduling
  *
@@ -253,7 +252,7 @@ struct drm_sched_rq {
         spinlock_t lock;
         /* Following members are protected by the @lock: */
-        struct drm_sched_entity *current_entity;
+        ktime_t rr_deadline;
         struct list_head entities;
         struct rb_root_cached rb_tree_root;
 };

From patchwork Mon Mar 31 20:16:58 2025
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 14034093
From: Tvrtko Ursulin
To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: kernel-dev@igalia.com, Christian König, Danilo Krummrich,
    Matthew Brost, Philipp Stanner
Subject: [RFC v3 07/14] drm/sched: Consolidate entity run queue management
Date: Mon, 31 Mar 2025 21:16:58 +0100
Message-ID: <20250331201705.60663-8-tvrtko.ursulin@igalia.com>
In-Reply-To: <20250331201705.60663-1-tvrtko.ursulin@igalia.com>

Move the code dealing with entities entering and exiting run queues into
helpers, to logically separate it from the handling of jobs entering and
exiting entities.
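The intended contract — the run-queue helper hides all the locking and returns which scheduler to wake, or NULL for a killed entity — can be sketched with hypothetical toy_* types (not the driver's structures):

```c
#include <stddef.h>

/* Hypothetical sketch of the new contract: the run-queue helper owns
 * all the entity/rq locking and hands back the scheduler to wake up,
 * or NULL when the entity was already killed. */
struct toy_sched {
	int wakeups;
};

struct toy_rq {
	struct toy_sched *sched;
};

struct toy_entity {
	struct toy_rq *rq;
	int stopped;
};

static struct toy_sched *toy_rq_add_entity(struct toy_entity *entity)
{
	/* entity->lock and then rq->lock would be taken in here */
	if (entity->stopped)
		return NULL;	/* killed entity, nothing to wake */

	return entity->rq->sched;
}

static void toy_push_job(struct toy_entity *entity)
{
	struct toy_sched *sched = toy_rq_add_entity(entity);

	if (sched)
		sched->wakeups++;	/* drm_sched_wakeup() analogue */
}
```

Returning the pre-selected scheduler lets the caller do the wakeup outside the helper's locks, which is the shape the real patch gives drm_sched_entity_push_job().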
Signed-off-by: Tvrtko Ursulin
Cc: Christian König
Cc: Danilo Krummrich
Cc: Matthew Brost
Cc: Philipp Stanner
---
 drivers/gpu/drm/scheduler/sched_entity.c   | 60 ++-------------
 drivers/gpu/drm/scheduler/sched_internal.h |  8 +-
 drivers/gpu/drm/scheduler/sched_main.c     | 87 +++++++++++++++++++---
 3 files changed, 83 insertions(+), 72 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index bbb7f3d3e3e8..8362184fe431 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -469,19 +469,9 @@ drm_sched_job_dependency(struct drm_sched_job *job,
         return NULL;
 }
 
-static ktime_t
-drm_sched_rq_get_rr_deadline(struct drm_sched_rq *rq)
-{
-        lockdep_assert_held(&rq->lock);
-
-        rq->rr_deadline = ktime_add_ns(rq->rr_deadline, 1);
-
-        return rq->rr_deadline;
-}
-
 struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity)
 {
-        struct drm_sched_job *sched_job, *next_job;
+        struct drm_sched_job *sched_job;
 
         sched_job = drm_sched_entity_queue_peek(entity);
         if (!sched_job)
@@ -512,27 +502,7 @@ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity)
 
         spsc_queue_pop(&entity->job_queue);
 
-        /*
-         * Update the entity's location in the min heap according to
-         * the timestamp of the next job, if any.
-         */
-        next_job = drm_sched_entity_queue_peek(entity);
-        if (next_job) {
-                struct drm_sched_rq *rq;
-                ktime_t ts;
-
-                if (drm_sched_policy == DRM_SCHED_POLICY_FIFO)
-                        ts = next_job->submit_ts;
-                else
-                        ts = drm_sched_rq_get_rr_deadline(rq);
-
-                spin_lock(&entity->lock);
-                rq = entity->rq;
-                spin_lock(&rq->lock);
-                drm_sched_rq_update_fifo_locked(entity, rq, ts);
-                spin_unlock(&rq->lock);
-                spin_unlock(&entity->lock);
-        }
+        drm_sched_rq_pop_entity(entity);
 
         /* Jobs and entities might have different lifecycles. Since we're
          * removing the job from the entities queue, set the jobs entity pointer
@@ -614,30 +584,10 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
         /* first job wakes up scheduler */
         if (first) {
                 struct drm_gpu_scheduler *sched;
-                struct drm_sched_rq *rq;
 
-                /* Add the entity to the run queue */
-                spin_lock(&entity->lock);
-                if (entity->stopped) {
-                        spin_unlock(&entity->lock);
-
-                        DRM_ERROR("Trying to push to a killed entity\n");
-                        return;
-                }
-
-                rq = entity->rq;
-                sched = rq->sched;
-
-                spin_lock(&rq->lock);
-                drm_sched_rq_add_entity(rq, entity);
-                if (drm_sched_policy == DRM_SCHED_POLICY_RR)
-                        submit_ts = drm_sched_rq_get_rr_deadline(rq);
-                drm_sched_rq_update_fifo_locked(entity, rq, submit_ts);
-
-                spin_unlock(&rq->lock);
-                spin_unlock(&entity->lock);
-
-                drm_sched_wakeup(sched);
+                sched = drm_sched_rq_add_entity(entity, submit_ts);
+                if (sched)
+                        drm_sched_wakeup(sched);
         }
 }
 EXPORT_SYMBOL(drm_sched_entity_push_job);
diff --git a/drivers/gpu/drm/scheduler/sched_internal.h b/drivers/gpu/drm/scheduler/sched_internal.h
index 599cf6e1bb74..8e7e477bace3 100644
--- a/drivers/gpu/drm/scheduler/sched_internal.h
+++ b/drivers/gpu/drm/scheduler/sched_internal.h
@@ -12,13 +12,11 @@ extern int drm_sched_policy;
 
 void drm_sched_wakeup(struct drm_gpu_scheduler *sched);
 
-void drm_sched_rq_add_entity(struct drm_sched_rq *rq,
-                             struct drm_sched_entity *entity);
+struct drm_gpu_scheduler *
+drm_sched_rq_add_entity(struct drm_sched_entity *entity, ktime_t ts);
 void drm_sched_rq_remove_entity(struct drm_sched_rq *rq,
                                 struct drm_sched_entity *entity);
-
-void drm_sched_rq_update_fifo_locked(struct drm_sched_entity *entity,
-                                     struct drm_sched_rq *rq, ktime_t ts);
+void drm_sched_rq_pop_entity(struct drm_sched_entity *entity);
 
 void drm_sched_entity_select_rq(struct drm_sched_entity *entity);
 struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity);
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index e931a9b91083..8736c7cd3ddd 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -150,15 +150,18 @@ static __always_inline bool drm_sched_entity_compare_before(struct rb_node *a,
 static void drm_sched_rq_remove_fifo_locked(struct drm_sched_entity *entity,
                                             struct drm_sched_rq *rq)
 {
+        lockdep_assert_held(&entity->lock);
+        lockdep_assert_held(&rq->lock);
+
         if (!RB_EMPTY_NODE(&entity->rb_tree_node)) {
                 rb_erase_cached(&entity->rb_tree_node, &rq->rb_tree_root);
                 RB_CLEAR_NODE(&entity->rb_tree_node);
         }
 }
 
-void drm_sched_rq_update_fifo_locked(struct drm_sched_entity *entity,
-                                     struct drm_sched_rq *rq,
-                                     ktime_t ts)
+static void drm_sched_rq_update_fifo_locked(struct drm_sched_entity *entity,
+                                            struct drm_sched_rq *rq,
+                                            ktime_t ts)
 {
         /*
          * Both locks need to be grabbed, one to protect from entity->rq change
@@ -193,25 +196,58 @@ static void drm_sched_rq_init(struct drm_gpu_scheduler *sched,
         rq->sched = sched;
 }
 
+static ktime_t
+drm_sched_rq_get_rr_deadline(struct drm_sched_rq *rq)
+{
+        lockdep_assert_held(&rq->lock);
+
+        rq->rr_deadline = ktime_add_ns(rq->rr_deadline, 1);
+
+        return rq->rr_deadline;
+}
+
 /**
  * drm_sched_rq_add_entity - add an entity
  *
- * @rq: scheduler run queue
  * @entity: scheduler entity
+ * @ts: submission timestamp
  *
  * Adds a scheduler entity to the run queue.
+ *
+ * Returns a DRM scheduler pre-selected to handle this entity.
  */
-void drm_sched_rq_add_entity(struct drm_sched_rq *rq,
-                             struct drm_sched_entity *entity)
+struct drm_gpu_scheduler *
+drm_sched_rq_add_entity(struct drm_sched_entity *entity, ktime_t ts)
 {
-        lockdep_assert_held(&entity->lock);
-        lockdep_assert_held(&rq->lock);
+        struct drm_gpu_scheduler *sched;
+        struct drm_sched_rq *rq;
 
-        if (!list_empty(&entity->list))
-                return;
+        /* Add the entity to the run queue */
+        spin_lock(&entity->lock);
+        if (entity->stopped) {
+                spin_unlock(&entity->lock);
 
-        atomic_inc(rq->sched->score);
-        list_add_tail(&entity->list, &rq->entities);
+                DRM_ERROR("Trying to push to a killed entity\n");
+                return NULL;
+        }
+
+        rq = entity->rq;
+        spin_lock(&rq->lock);
+        sched = rq->sched;
+
+        if (list_empty(&entity->list)) {
+                atomic_inc(sched->score);
+                list_add_tail(&entity->list, &rq->entities);
+        }
+
+        if (drm_sched_policy == DRM_SCHED_POLICY_RR)
+                ts = drm_sched_rq_get_rr_deadline(rq);
+        drm_sched_rq_update_fifo_locked(entity, rq, ts);
+
+        spin_unlock(&rq->lock);
+        spin_unlock(&entity->lock);
+
+        return sched;
 }
 
 /**
@@ -240,6 +276,33 @@ void drm_sched_rq_remove_entity(struct drm_sched_rq *rq,
         spin_unlock(&rq->lock);
 }
 
+void drm_sched_rq_pop_entity(struct drm_sched_entity *entity)
+{
+        struct drm_sched_job *next_job;
+        struct drm_sched_rq *rq;
+        ktime_t ts;
+
+        /*
+         * Update the entity's location in the min heap according to
+         * the timestamp of the next job, if any.
+         */
+        next_job = drm_sched_entity_queue_peek(entity);
+        if (!next_job)
+                return;
+
+        if (drm_sched_policy == DRM_SCHED_POLICY_FIFO)
+                ts = next_job->submit_ts;
+        else
+                ts = drm_sched_rq_get_rr_deadline(rq);
+
+        spin_lock(&entity->lock);
+        rq = entity->rq;
+        spin_lock(&rq->lock);
+        drm_sched_rq_update_fifo_locked(entity, rq, ts);
+        spin_unlock(&rq->lock);
+        spin_unlock(&entity->lock);
+}
+
 /**
  * drm_sched_rq_select_entity - Select an entity which provides a job to run
  *

From patchwork Mon Mar 31 20:16:59 2025
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 14034089
From: Tvrtko Ursulin
To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: kernel-dev@igalia.com, Christian König, Danilo Krummrich,
    Matthew Brost, Philipp Stanner
Subject: [RFC v3 08/14] drm/sched: Move run queue related code into a separate file
Date: Mon, 31 Mar 2025 21:16:59 +0100
Message-ID: <20250331201705.60663-9-tvrtko.ursulin@igalia.com>
In-Reply-To: <20250331201705.60663-1-tvrtko.ursulin@igalia.com>

Let's move all the code dealing with struct drm_sched_rq into a separate
compilation unit. The advantage is that sched_main.c is left with a
clearer set of responsibilities.
Signed-off-by: Tvrtko Ursulin
Cc: Christian König
Cc: Danilo Krummrich
Cc: Matthew Brost
Cc: Philipp Stanner
---
 drivers/gpu/drm/scheduler/Makefile         |   2 +-
 drivers/gpu/drm/scheduler/sched_internal.h |   7 +
 drivers/gpu/drm/scheduler/sched_main.c     | 213 +------------------
 drivers/gpu/drm/scheduler/sched_rq.c       | 217 +++++++++++++++++++++
 4 files changed, 227 insertions(+), 212 deletions(-)
 create mode 100644 drivers/gpu/drm/scheduler/sched_rq.c

diff --git a/drivers/gpu/drm/scheduler/Makefile b/drivers/gpu/drm/scheduler/Makefile
index 6e13e4c63e9d..74e75eff6df5 100644
--- a/drivers/gpu/drm/scheduler/Makefile
+++ b/drivers/gpu/drm/scheduler/Makefile
@@ -20,7 +20,7 @@
 # OTHER DEALINGS IN THE SOFTWARE.
 #
 #
-gpu-sched-y := sched_main.o sched_fence.o sched_entity.o
+gpu-sched-y := sched_main.o sched_fence.o sched_entity.o sched_rq.o
 
 obj-$(CONFIG_DRM_SCHED) += gpu-sched.o
diff --git a/drivers/gpu/drm/scheduler/sched_internal.h b/drivers/gpu/drm/scheduler/sched_internal.h
index 8e7e477bace3..ee13a986b920 100644
--- a/drivers/gpu/drm/scheduler/sched_internal.h
+++ b/drivers/gpu/drm/scheduler/sched_internal.h
@@ -10,8 +10,15 @@ extern int drm_sched_policy;
 #define DRM_SCHED_POLICY_RR   0
 #define DRM_SCHED_POLICY_FIFO 1
 
+bool drm_sched_can_queue(struct drm_gpu_scheduler *sched,
+                         struct drm_sched_entity *entity);
 void drm_sched_wakeup(struct drm_gpu_scheduler *sched);
 
+void drm_sched_rq_init(struct drm_gpu_scheduler *sched,
+                       struct drm_sched_rq *rq);
+struct drm_sched_entity *
+drm_sched_rq_select_entity(struct drm_gpu_scheduler *sched,
+                           struct drm_sched_rq *rq);
 struct drm_gpu_scheduler *
 drm_sched_rq_add_entity(struct drm_sched_entity *entity, ktime_t ts);
 void drm_sched_rq_remove_entity(struct drm_sched_rq *rq,
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 8736c7cd3ddd..f9c82db69300 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -117,8 +117,8 @@ static u32 drm_sched_available_credits(struct drm_gpu_scheduler *sched)
  * Return true if we can push at least one more job from @entity, false
  * otherwise.
  */
-static bool drm_sched_can_queue(struct drm_gpu_scheduler *sched,
-                                struct drm_sched_entity *entity)
+bool drm_sched_can_queue(struct drm_gpu_scheduler *sched,
+                         struct drm_sched_entity *entity)
 {
         struct drm_sched_job *s_job;
 
@@ -138,215 +138,6 @@ static bool drm_sched_can_queue(struct drm_gpu_scheduler *sched,
         return drm_sched_available_credits(sched) >= s_job->credits;
 }
 
-static __always_inline bool drm_sched_entity_compare_before(struct rb_node *a,
-                                                            const struct rb_node *b)
-{
-        struct drm_sched_entity *ent_a = rb_entry((a), struct drm_sched_entity, rb_tree_node);
-        struct drm_sched_entity *ent_b = rb_entry((b), struct drm_sched_entity, rb_tree_node);
-
-        return ktime_before(ent_a->oldest_job_waiting, ent_b->oldest_job_waiting);
-}
-
-static void drm_sched_rq_remove_fifo_locked(struct drm_sched_entity *entity,
-                                            struct drm_sched_rq *rq)
-{
-        lockdep_assert_held(&entity->lock);
-        lockdep_assert_held(&rq->lock);
-
-        if (!RB_EMPTY_NODE(&entity->rb_tree_node)) {
-                rb_erase_cached(&entity->rb_tree_node, &rq->rb_tree_root);
-                RB_CLEAR_NODE(&entity->rb_tree_node);
-        }
-}
-
-static void drm_sched_rq_update_fifo_locked(struct drm_sched_entity *entity,
-                                            struct drm_sched_rq *rq,
-                                            ktime_t ts)
-{
-        /*
-         * Both locks need to be grabbed, one to protect from entity->rq change
-         * for entity from within concurrent drm_sched_entity_select_rq and the
-         * other to update the rb tree structure.
-         */
-        lockdep_assert_held(&entity->lock);
-        lockdep_assert_held(&rq->lock);
-
-        drm_sched_rq_remove_fifo_locked(entity, rq);
-
-        entity->oldest_job_waiting = ts;
-
-        rb_add_cached(&entity->rb_tree_node, &rq->rb_tree_root,
-                      drm_sched_entity_compare_before);
-}
-
-/**
- * drm_sched_rq_init - initialize a given run queue struct
- *
- * @sched: scheduler instance to associate with this run queue
- * @rq: scheduler run queue
- *
- * Initializes a scheduler runqueue.
- */
-static void drm_sched_rq_init(struct drm_gpu_scheduler *sched,
-                              struct drm_sched_rq *rq)
-{
-        spin_lock_init(&rq->lock);
-        INIT_LIST_HEAD(&rq->entities);
-        rq->rb_tree_root = RB_ROOT_CACHED;
-        rq->sched = sched;
-}
-
-static ktime_t
-drm_sched_rq_get_rr_deadline(struct drm_sched_rq *rq)
-{
-        lockdep_assert_held(&rq->lock);
-
-        rq->rr_deadline = ktime_add_ns(rq->rr_deadline, 1);
-
-        return rq->rr_deadline;
-}
-
-/**
- * drm_sched_rq_add_entity - add an entity
- *
- * @entity: scheduler entity
- * @ts: submission timestamp
- *
- * Adds a scheduler entity to the run queue.
- *
- * Returns a DRM scheduler pre-selected to handle this entity.
- */ -struct drm_gpu_scheduler * -drm_sched_rq_add_entity(struct drm_sched_entity *entity, ktime_t ts) -{ - struct drm_gpu_scheduler *sched; - struct drm_sched_rq *rq; - - /* Add the entity to the run queue */ - spin_lock(&entity->lock); - if (entity->stopped) { - spin_unlock(&entity->lock); - - DRM_ERROR("Trying to push to a killed entity\n"); - return NULL; - } - - rq = entity->rq; - spin_lock(&rq->lock); - sched = rq->sched; - - if (list_empty(&entity->list)) { - atomic_inc(sched->score); - list_add_tail(&entity->list, &rq->entities); - } - - if (drm_sched_policy == DRM_SCHED_POLICY_RR) - ts = drm_sched_rq_get_rr_deadline(rq); - drm_sched_rq_update_fifo_locked(entity, rq, ts); - - spin_unlock(&rq->lock); - spin_unlock(&entity->lock); - - return sched; -} - -/** - * drm_sched_rq_remove_entity - remove an entity - * - * @rq: scheduler run queue - * @entity: scheduler entity - * - * Removes a scheduler entity from the run queue. - */ -void drm_sched_rq_remove_entity(struct drm_sched_rq *rq, - struct drm_sched_entity *entity) -{ - lockdep_assert_held(&entity->lock); - - if (list_empty(&entity->list)) - return; - - spin_lock(&rq->lock); - - atomic_dec(rq->sched->score); - list_del_init(&entity->list); - - drm_sched_rq_remove_fifo_locked(entity, rq); - - spin_unlock(&rq->lock); -} - -void drm_sched_rq_pop_entity(struct drm_sched_entity *entity) -{ - struct drm_sched_job *next_job; - struct drm_sched_rq *rq; - ktime_t ts; - - /* - * Update the entity's location in the min heap according to - * the timestamp of the next job, if any. 
- */ - next_job = drm_sched_entity_queue_peek(entity); - if (!next_job) - return; - - if (drm_sched_policy == DRM_SCHED_POLICY_FIFO) - ts = next_job->submit_ts; - else - ts = drm_sched_rq_get_rr_deadline(rq); - - spin_lock(&entity->lock); - rq = entity->rq; - spin_lock(&rq->lock); - drm_sched_rq_update_fifo_locked(entity, rq, ts); - spin_unlock(&rq->lock); - spin_unlock(&entity->lock); -} - -/** - * drm_sched_rq_select_entity - Select an entity which provides a job to run - * - * @sched: the gpu scheduler - * @rq: scheduler run queue to check. - * - * Find oldest waiting ready entity. - * - * Return an entity if one is found; return an error-pointer (!NULL) if an - * entity was ready, but the scheduler had insufficient credits to accommodate - * its job; return NULL, if no ready entity was found. - */ -static struct drm_sched_entity * -drm_sched_rq_select_entity(struct drm_gpu_scheduler *sched, - struct drm_sched_rq *rq) -{ - struct drm_sched_entity *entity = NULL; - struct rb_node *rb; - - spin_lock(&rq->lock); - for (rb = rb_first_cached(&rq->rb_tree_root); rb; rb = rb_next(rb)) { - entity = rb_entry(rb, struct drm_sched_entity, rb_tree_node); - if (drm_sched_entity_is_ready(entity)) - break; - else - entity = NULL; - } - spin_unlock(&rq->lock); - - if (!entity) - return NULL; - - /* - * If scheduler cannot take more jobs signal the caller to not consider - * lower priority queues. 
- */ - if (!drm_sched_can_queue(sched, entity)) - return ERR_PTR(-ENOSPC); - - reinit_completion(&entity->entity_idle); - - return entity; -} - /** * drm_sched_run_job_queue - enqueue run-job work * @sched: scheduler instance diff --git a/drivers/gpu/drm/scheduler/sched_rq.c b/drivers/gpu/drm/scheduler/sched_rq.c new file mode 100644 index 000000000000..a3104a4e5da7 --- /dev/null +++ b/drivers/gpu/drm/scheduler/sched_rq.c @@ -0,0 +1,217 @@ +#include + +#include +#include + +#include "sched_internal.h" + +static __always_inline bool +drm_sched_entity_compare_before(struct rb_node *a, const struct rb_node *b) +{ + struct drm_sched_entity *ea = + rb_entry((a), struct drm_sched_entity, rb_tree_node); + struct drm_sched_entity *eb = + rb_entry((b), struct drm_sched_entity, rb_tree_node); + + return ktime_before(ea->oldest_job_waiting, eb->oldest_job_waiting); +} + +static void drm_sched_rq_remove_fifo_locked(struct drm_sched_entity *entity, + struct drm_sched_rq *rq) +{ + lockdep_assert_held(&entity->lock); + lockdep_assert_held(&rq->lock); + + if (!RB_EMPTY_NODE(&entity->rb_tree_node)) { + rb_erase_cached(&entity->rb_tree_node, &rq->rb_tree_root); + RB_CLEAR_NODE(&entity->rb_tree_node); + } +} + +static void drm_sched_rq_update_fifo_locked(struct drm_sched_entity *entity, + struct drm_sched_rq *rq, + ktime_t ts) +{ + /* + * Both locks need to be grabbed, one to protect from entity->rq change + * for entity from within concurrent drm_sched_entity_select_rq and the + * other to update the rb tree structure. + */ + lockdep_assert_held(&entity->lock); + lockdep_assert_held(&rq->lock); + + drm_sched_rq_remove_fifo_locked(entity, rq); + + entity->oldest_job_waiting = ts; + + rb_add_cached(&entity->rb_tree_node, &rq->rb_tree_root, + drm_sched_entity_compare_before); +} + +/** + * drm_sched_rq_init - initialize a given run queue struct + * + * @sched: scheduler instance to associate with this run queue + * @rq: scheduler run queue + * + * Initializes a scheduler runqueue. 
+ */ +void drm_sched_rq_init(struct drm_gpu_scheduler *sched, + struct drm_sched_rq *rq) +{ + spin_lock_init(&rq->lock); + INIT_LIST_HEAD(&rq->entities); + rq->rb_tree_root = RB_ROOT_CACHED; + rq->sched = sched; +} + +static ktime_t +drm_sched_rq_get_rr_deadline(struct drm_sched_rq *rq) +{ + lockdep_assert_held(&rq->lock); + + rq->rr_deadline = ktime_add_ns(rq->rr_deadline, 1); + + return rq->rr_deadline; +} + +/** + * drm_sched_rq_add_entity - add an entity + * + * @entity: scheduler entity + * @ts: submission timestamp + * + * Adds a scheduler entity to the run queue. + * + * Returns a DRM scheduler pre-selected to handle this entity. + */ +struct drm_gpu_scheduler * +drm_sched_rq_add_entity(struct drm_sched_entity *entity, ktime_t ts) +{ + struct drm_gpu_scheduler *sched; + struct drm_sched_rq *rq; + + /* Add the entity to the run queue */ + spin_lock(&entity->lock); + if (entity->stopped) { + spin_unlock(&entity->lock); + + DRM_ERROR("Trying to push to a killed entity\n"); + return NULL; + } + + rq = entity->rq; + spin_lock(&rq->lock); + sched = rq->sched; + + if (list_empty(&entity->list)) { + atomic_inc(sched->score); + list_add_tail(&entity->list, &rq->entities); + } + + if (drm_sched_policy == DRM_SCHED_POLICY_RR) + ts = drm_sched_rq_get_rr_deadline(rq); + drm_sched_rq_update_fifo_locked(entity, rq, ts); + + spin_unlock(&rq->lock); + spin_unlock(&entity->lock); + + return sched; +} + +/** + * drm_sched_rq_remove_entity - remove an entity + * + * @rq: scheduler run queue + * @entity: scheduler entity + * + * Removes a scheduler entity from the run queue. 
+ */ +void drm_sched_rq_remove_entity(struct drm_sched_rq *rq, + struct drm_sched_entity *entity) +{ + lockdep_assert_held(&entity->lock); + + if (list_empty(&entity->list)) + return; + + spin_lock(&rq->lock); + + atomic_dec(rq->sched->score); + list_del_init(&entity->list); + + drm_sched_rq_remove_fifo_locked(entity, rq); + + spin_unlock(&rq->lock); +} + +void drm_sched_rq_pop_entity(struct drm_sched_entity *entity) +{ + struct drm_sched_job *next_job; + struct drm_sched_rq *rq; + ktime_t ts; + + /* + * Update the entity's location in the min heap according to + * the timestamp of the next job, if any. + */ + next_job = drm_sched_entity_queue_peek(entity); + if (!next_job) + return; + + if (drm_sched_policy == DRM_SCHED_POLICY_FIFO) + ts = next_job->submit_ts; + else + ts = drm_sched_rq_get_rr_deadline(rq); + + spin_lock(&entity->lock); + rq = entity->rq; + spin_lock(&rq->lock); + drm_sched_rq_update_fifo_locked(entity, rq, ts); + spin_unlock(&rq->lock); + spin_unlock(&entity->lock); +} + +/** + * drm_sched_rq_select_entity - Select an entity which provides a job to run + * + * @sched: the gpu scheduler + * @rq: scheduler run queue to check. + * + * Find oldest waiting ready entity. + * + * Return an entity if one is found; return an error-pointer (!NULL) if an + * entity was ready, but the scheduler had insufficient credits to accommodate + * its job; return NULL, if no ready entity was found. 
+ */ +struct drm_sched_entity * +drm_sched_rq_select_entity(struct drm_gpu_scheduler *sched, + struct drm_sched_rq *rq) +{ + struct drm_sched_entity *entity = NULL; + struct rb_node *rb; + + spin_lock(&rq->lock); + for (rb = rb_first_cached(&rq->rb_tree_root); rb; rb = rb_next(rb)) { + entity = rb_entry(rb, struct drm_sched_entity, rb_tree_node); + if (drm_sched_entity_is_ready(entity)) + break; + else + entity = NULL; + } + spin_unlock(&rq->lock); + + if (!entity) + return NULL; + + /* + * If scheduler cannot take more jobs signal the caller to not consider + * lower priority queues. + */ + if (!drm_sched_can_queue(sched, entity)) + return ERR_PTR(-ENOSPC); + + reinit_completion(&entity->entity_idle); + + return entity; +} From patchwork Mon Mar 31 20:17:00 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Tvrtko Ursulin X-Patchwork-Id: 14034092 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 87FFAC36016 for ; Mon, 31 Mar 2025 20:17:47 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id E0EB810E497; Mon, 31 Mar 2025 20:17:43 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=fail reason="signature verification failed" (2048-bit key; unprotected) header.d=igalia.com header.i=@igalia.com header.b="JX8Lvrds"; dkim-atps=neutral Received: from fanzine2.igalia.com (fanzine.igalia.com [178.60.130.6]) by gabe.freedesktop.org (Postfix) with ESMTPS id 713A310E488; Mon, 31 Mar 2025 20:17:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=igalia.com; s=20170329; 
h=Content-Transfer-Encoding:Content-Type:MIME-Version:References: In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe: List-Post:List-Owner:List-Archive; bh=4OrOnvzXCJ9FEh+MVnJM5ujFXP71iWkDFpB/I8kP+1o=; b=JX8LvrdsUExvMbw/9SoxL3UcWj qLWIkPjFxGdFemg8gUHN0eUMI1MaIj9cFjgtqesPaz0USiF1afljfFtnDgUgRGSCBcM/4LrPYT8kz rgDF2ux+Z9gsWHF8oWcozR0szcbWCXUWUSh/aV5IW9s2J6rCTpvZhF7vTj27ML+9H0PR+xzP0aBbt pRSJAIzF0rc16bqsBi0HyfKcFW/GE+/Qfpv7YvtRcjKkHp7X9KTP8/PEZ8WtNtVddbwO9YezX1xGm LuxDst7PXQ0BCoO/9EIis/to5aiwbDUEF5RCUMdp2wK9wu2Jmevuemq+0ZduF+lsqGKGRptqo8a3P DPkXDmPA==; Received: from [90.241.98.187] (helo=localhost) by fanzine2.igalia.com with esmtpsa (Cipher TLS1.3:ECDHE_SECP256R1__RSA_PSS_RSAE_SHA256__AES_256_GCM:256) (Exim) id 1tzLZc-009M3m-N8; Mon, 31 Mar 2025 22:17:32 +0200 From: Tvrtko Ursulin To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org Cc: kernel-dev@igalia.com, Tvrtko Ursulin , =?utf-8?q?Christian_K=C3=B6nig?= , Danilo Krummrich , Matthew Brost , Philipp Stanner Subject: [RFC v3 09/14] drm/sched: Add deadline policy Date: Mon, 31 Mar 2025 21:17:00 +0100 Message-ID: <20250331201705.60663-10-tvrtko.ursulin@igalia.com> X-Mailer: git-send-email 2.48.0 In-Reply-To: <20250331201705.60663-1-tvrtko.ursulin@igalia.com> References: <20250331201705.60663-1-tvrtko.ursulin@igalia.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" The deadline scheduling policy should be a fairer flavour of FIFO, with two main advantages: first, it can naturally connect with the dma-fence deadlines, and secondly, it can do away with multiple run queues per
scheduler. From the latter comes the fairness advantage. Where the current FIFO policy will always starve low priority entities behind normal ones, normal behind high, and so on, deadline tracks all runnable entities in a single run queue and assigns them deadlines based on priority. Instead of being ordered strictly by priority, jobs and entities become ordered by deadlines. This means that a later higher priority submission can still overtake an earlier lower priority one, but eventually the lower priority one will get its turn even if high priority work is constantly being fed in. The current mapping of priority to deadlines is somewhat arbitrary and looks like this (submit timestamp plus a constant offset in microseconds): static const unsigned int d_us[] = { [DRM_SCHED_PRIORITY_KERNEL] = 100, [DRM_SCHED_PRIORITY_HIGH] = 1000, [DRM_SCHED_PRIORITY_NORMAL] = 5000, [DRM_SCHED_PRIORITY_LOW] = 100000, }; Assuming simultaneous submission of one normal and one low priority job at a time "t", they will get respective deadlines of t+5ms and t+100ms. Hence normal will run first and low will run after it, or at the latest 100ms after it was submitted, in case other higher priority submissions overtake it in the meantime. Because the deadline policy does not need multiple run queues, later removing the FIFO and RR policies would allow for a significant simplification of the code base by reducing the scheduler to run queue relationship from 1:N to 1:1.
Signed-off-by: Tvrtko Ursulin Cc: Christian König Cc: Danilo Krummrich Cc: Matthew Brost Cc: Philipp Stanner --- drivers/gpu/drm/scheduler/sched_entity.c | 53 ++++++++++++++++++---- drivers/gpu/drm/scheduler/sched_internal.h | 9 +++- drivers/gpu/drm/scheduler/sched_main.c | 14 ++++-- drivers/gpu/drm/scheduler/sched_rq.c | 4 +- include/drm/gpu_scheduler.h | 3 ++ 5 files changed, 65 insertions(+), 18 deletions(-) diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c index 8362184fe431..f4930b44f50d 100644 --- a/drivers/gpu/drm/scheduler/sched_entity.c +++ b/drivers/gpu/drm/scheduler/sched_entity.c @@ -70,6 +70,8 @@ int drm_sched_entity_init(struct drm_sched_entity *entity, entity->guilty = guilty; entity->num_sched_list = num_sched_list; entity->priority = priority; + entity->rq_priority = drm_sched_policy == DRM_SCHED_POLICY_DEADLINE ? + DRM_SCHED_PRIORITY_KERNEL : priority; /* * It's perfectly valid to initialize an entity without having a valid * scheduler attached. It's just not valid to use the scheduler before it @@ -86,17 +88,23 @@ int drm_sched_entity_init(struct drm_sched_entity *entity, */ pr_warn("%s: called with uninitialized scheduler\n", __func__); } else if (num_sched_list) { - /* The "priority" of an entity cannot exceed the number of run-queues of a - * scheduler. Protect against num_rqs being 0, by converting to signed. Choose - * the lowest priority available. + enum drm_sched_priority p = entity->priority; + + /* + * The "priority" of an entity cannot exceed the number of + * run-queues of a scheduler. Protect against num_rqs being 0, + * by converting to signed. Choose the lowest priority + * available. */ - if (entity->priority >= sched_list[0]->num_rqs) { - dev_err(sched_list[0]->dev, "entity has out-of-bounds priority: %u. 
num_rqs: %u\n", - entity->priority, sched_list[0]->num_rqs); - entity->priority = max_t(s32, (s32) sched_list[0]->num_rqs - 1, - (s32) DRM_SCHED_PRIORITY_KERNEL); + if (p >= sched_list[0]->num_user_rqs) { + dev_err(sched_list[0]->dev, "entity with out-of-bounds priority:%u num_user_rqs:%u\n", + p, sched_list[0]->num_user_rqs); + p = max_t(s32, + (s32)sched_list[0]->num_user_rqs - 1, + (s32)DRM_SCHED_PRIORITY_KERNEL); + entity->priority = p; } - entity->rq = sched_list[0]->sched_rq[entity->priority]; + entity->rq = sched_list[0]->sched_rq[entity->rq_priority]; } init_completion(&entity->entity_idle); @@ -398,6 +406,27 @@ void drm_sched_entity_set_priority(struct drm_sched_entity *entity, } EXPORT_SYMBOL(drm_sched_entity_set_priority); +static ktime_t +__drm_sched_entity_get_job_deadline(struct drm_sched_entity *entity, + ktime_t submit_ts) +{ + static const unsigned int d_us[] = { + [DRM_SCHED_PRIORITY_KERNEL] = 100, + [DRM_SCHED_PRIORITY_HIGH] = 1000, + [DRM_SCHED_PRIORITY_NORMAL] = 5000, + [DRM_SCHED_PRIORITY_LOW] = 100000, + }; + + return ktime_add_us(submit_ts, d_us[entity->priority]); +} + +ktime_t +drm_sched_entity_get_job_deadline(struct drm_sched_entity *entity, + struct drm_sched_job *job) +{ + return __drm_sched_entity_get_job_deadline(entity, job->submit_ts); +} + /* * Add a callback to the current dependency of the entity to wake up the * scheduler when the entity becomes available. @@ -543,7 +572,7 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity) spin_lock(&entity->lock); sched = drm_sched_pick_best(entity->sched_list, entity->num_sched_list); - rq = sched ? sched->sched_rq[entity->priority] : NULL; + rq = sched ? 
sched->sched_rq[entity->rq_priority] : NULL; if (rq != entity->rq) { drm_sched_rq_remove_entity(entity->rq, entity); entity->rq = rq; @@ -585,6 +614,10 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job) if (first) { struct drm_gpu_scheduler *sched; + if (drm_sched_policy == DRM_SCHED_POLICY_DEADLINE) + submit_ts = __drm_sched_entity_get_job_deadline(entity, + submit_ts); + sched = drm_sched_rq_add_entity(entity, submit_ts); if (sched) drm_sched_wakeup(sched); diff --git a/drivers/gpu/drm/scheduler/sched_internal.h b/drivers/gpu/drm/scheduler/sched_internal.h index ee13a986b920..a81bf25569cd 100644 --- a/drivers/gpu/drm/scheduler/sched_internal.h +++ b/drivers/gpu/drm/scheduler/sched_internal.h @@ -7,8 +7,9 @@ /* Used to choose between FIFO and RR job-scheduling */ extern int drm_sched_policy; -#define DRM_SCHED_POLICY_RR 0 -#define DRM_SCHED_POLICY_FIFO 1 +#define DRM_SCHED_POLICY_RR 0 +#define DRM_SCHED_POLICY_FIFO 1 +#define DRM_SCHED_POLICY_DEADLINE 2 bool drm_sched_can_queue(struct drm_gpu_scheduler *sched, struct drm_sched_entity *entity); @@ -38,6 +39,10 @@ void drm_sched_fence_scheduled(struct drm_sched_fence *fence, struct dma_fence *parent); void drm_sched_fence_finished(struct drm_sched_fence *fence, int result); + +ktime_t drm_sched_entity_get_job_deadline(struct drm_sched_entity *entity, + struct drm_sched_job *job); + /** * drm_sched_entity_queue_pop - Low level helper for popping queued jobs * diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c index f9c82db69300..cfe6bc728271 100644 --- a/drivers/gpu/drm/scheduler/sched_main.c +++ b/drivers/gpu/drm/scheduler/sched_main.c @@ -89,13 +89,13 @@ static struct lockdep_map drm_sched_lockdep_map = { }; #endif -int drm_sched_policy = DRM_SCHED_POLICY_FIFO; +int drm_sched_policy = DRM_SCHED_POLICY_DEADLINE; /** * DOC: sched_policy (int) * Used to override default entities scheduling policy in a run queue. 
*/ -MODULE_PARM_DESC(sched_policy, "Specify the scheduling policy for entities on a run-queue, " __stringify(DRM_SCHED_POLICY_RR) " = Round Robin, " __stringify(DRM_SCHED_POLICY_FIFO) " = FIFO (default)."); +MODULE_PARM_DESC(sched_policy, "Specify the scheduling policy for entities on a run-queue, " __stringify(DRM_SCHED_POLICY_RR) " = Round Robin, " __stringify(DRM_SCHED_POLICY_FIFO) " = FIFO, " __stringify(DRM_SCHED_POLICY_DEADLINE) " = Virtual deadline (default)."); module_param_named(sched_policy, drm_sched_policy, int, 0444); static u32 drm_sched_available_credits(struct drm_gpu_scheduler *sched) @@ -1085,11 +1085,15 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, const struct drm_sched_init_ sched->own_submit_wq = true; } - sched->sched_rq = kmalloc_array(args->num_rqs, sizeof(*sched->sched_rq), + sched->num_user_rqs = args->num_rqs; + sched->num_rqs = drm_sched_policy != DRM_SCHED_POLICY_DEADLINE ? + args->num_rqs : 1; + sched->sched_rq = kmalloc_array(sched->num_rqs, + sizeof(*sched->sched_rq), GFP_KERNEL | __GFP_ZERO); if (!sched->sched_rq) goto Out_check_own; - sched->num_rqs = args->num_rqs; + for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) { sched->sched_rq[i] = kzalloc(sizeof(*sched->sched_rq[i]), GFP_KERNEL); if (!sched->sched_rq[i]) @@ -1204,7 +1208,7 @@ void drm_sched_increase_karma(struct drm_sched_job *bad) if (bad->s_priority != DRM_SCHED_PRIORITY_KERNEL) { atomic_inc(&bad->karma); - for (i = DRM_SCHED_PRIORITY_HIGH; i < sched->num_rqs; i++) { + for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) { struct drm_sched_rq *rq = sched->sched_rq[i]; spin_lock(&rq->lock); diff --git a/drivers/gpu/drm/scheduler/sched_rq.c b/drivers/gpu/drm/scheduler/sched_rq.c index a3104a4e5da7..dc643f69da4d 100644 --- a/drivers/gpu/drm/scheduler/sched_rq.c +++ b/drivers/gpu/drm/scheduler/sched_rq.c @@ -159,7 +159,9 @@ void drm_sched_rq_pop_entity(struct drm_sched_entity *entity) if (!next_job) return; - if (drm_sched_policy == 
DRM_SCHED_POLICY_FIFO) + if (drm_sched_policy == DRM_SCHED_POLICY_DEADLINE) + ts = drm_sched_entity_get_job_deadline(entity, next_job); + else if (drm_sched_policy == DRM_SCHED_POLICY_FIFO) ts = next_job->submit_ts; else ts = drm_sched_rq_get_rr_deadline(rq); diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h index 1073cc569cce..f0fbd95bb39b 100644 --- a/include/drm/gpu_scheduler.h +++ b/include/drm/gpu_scheduler.h @@ -147,6 +147,8 @@ struct drm_sched_entity { */ struct spsc_queue job_queue; + enum drm_sched_priority rq_priority; + /** * @fence_seq: * @@ -551,6 +553,7 @@ struct drm_gpu_scheduler { long timeout; const char *name; u32 num_rqs; + u32 num_user_rqs; struct drm_sched_rq **sched_rq; wait_queue_head_t job_scheduled; atomic64_t job_id_count; From patchwork Mon Mar 31 20:17:01 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Tvrtko Ursulin X-Patchwork-Id: 14034096 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 5F3A5C36017 for ; Mon, 31 Mar 2025 20:18:13 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 8813710E4A7; Mon, 31 Mar 2025 20:18:12 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=fail reason="signature verification failed" (2048-bit key; unprotected) header.d=igalia.com header.i=@igalia.com header.b="pLAyLA9W"; dkim-atps=neutral Received: from fanzine2.igalia.com (fanzine.igalia.com [178.60.130.6]) by gabe.freedesktop.org (Postfix) with ESMTPS id 397B210E488; Mon, 31 Mar 2025 20:17:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=igalia.com; 
s=20170329; h=Content-Transfer-Encoding:Content-Type:MIME-Version:References: In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe: List-Post:List-Owner:List-Archive; bh=FQlBf1OK9ZAjInW9UebcgJTtfP/hw9X/N1b9B4bVXms=; b=pLAyLA9WyBGysnV1Nje0jsL7lI 3MLiErustzpnQXRdYij0Y+jf6oc4bOHMxL9g3ufjyeQ2GA/AcRI+dkgr8t3cQc9cW5zy4WLJJKRgF bGBVUYEPCr9EPsGtcBwS5PIy122T6gRiDeu5WcH6o0CyIX5yeA6A+ym47mdjPZfZAyHFU5mQOYXhL E3xwp6ioGxu2tmrQD7Y756bMepsk5PJiGvqfGNCwxqTpJoDL9CzIq7dv5UOw6acYx//25FFPlW2nk m6FUDuw4PvLVsAtuRsmKGZCpcDqGu6/YVAAadBlMyXlpZk2Emkzj/3mlFXs82gH9tgDM/XiQUsA1a 85GWo2Uw==; Received: from [90.241.98.187] (helo=localhost) by fanzine2.igalia.com with esmtpsa (Cipher TLS1.3:ECDHE_SECP256R1__RSA_PSS_RSAE_SHA256__AES_256_GCM:256) (Exim) id 1tzLZd-009M3v-Em; Mon, 31 Mar 2025 22:17:33 +0200 From: Tvrtko Ursulin To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org Cc: kernel-dev@igalia.com, Tvrtko Ursulin , =?utf-8?q?Christian_K=C3=B6nig?= , Danilo Krummrich , Matthew Brost , Philipp Stanner Subject: [RFC v3 10/14] drm/sched: Remove FIFO and RR and simplify to a single run queue Date: Mon, 31 Mar 2025 21:17:01 +0100 Message-ID: <20250331201705.60663-11-tvrtko.ursulin@igalia.com> X-Mailer: git-send-email 2.48.0 In-Reply-To: <20250331201705.60663-1-tvrtko.ursulin@igalia.com> References: <20250331201705.60663-1-tvrtko.ursulin@igalia.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" If the new deadline policy is at least as good as FIFO and we can afford to remove round-robin, we can simplify the scheduler code by making the scheduler to run queue 
relationship always 1:1 and remove some code. Also, now that the FIFO policy is gone the tree of entities is not a FIFO tree any more so rename it to just the tree. Signed-off-by: Tvrtko Ursulin Cc: Christian König Cc: Danilo Krummrich Cc: Matthew Brost Cc: Philipp Stanner --- drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 23 ++-- drivers/gpu/drm/scheduler/sched_entity.c | 30 +---- drivers/gpu/drm/scheduler/sched_internal.h | 7 -- drivers/gpu/drm/scheduler/sched_main.c | 133 +++++---------------- drivers/gpu/drm/scheduler/sched_rq.c | 32 ++--- include/drm/gpu_scheduler.h | 6 +- 6 files changed, 54 insertions(+), 177 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c index acb21fc8b3ce..9440af58073b 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c @@ -459,25 +459,22 @@ drm_sched_entity_queue_pop(struct drm_sched_entity *entity) void amdgpu_job_stop_all_jobs_on_sched(struct drm_gpu_scheduler *sched) { + struct drm_sched_rq *rq = sched->rq; + struct drm_sched_entity *s_entity; struct drm_sched_job *s_job; - struct drm_sched_entity *s_entity = NULL; - int i; /* Signal all jobs not yet scheduled */ - for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) { - struct drm_sched_rq *rq = sched->sched_rq[i]; - spin_lock(&rq->lock); - list_for_each_entry(s_entity, &rq->entities, list) { - while ((s_job = drm_sched_entity_queue_pop(s_entity))) { - struct drm_sched_fence *s_fence = s_job->s_fence; + spin_lock(&rq->lock); + list_for_each_entry(s_entity, &rq->entities, list) { + while ((s_job = drm_sched_entity_queue_pop(s_entity))) { + struct drm_sched_fence *s_fence = s_job->s_fence; - dma_fence_signal(&s_fence->scheduled); - dma_fence_set_error(&s_fence->finished, -EHWPOISON); - dma_fence_signal(&s_fence->finished); - } + dma_fence_signal(&s_fence->scheduled); + dma_fence_set_error(&s_fence->finished, -EHWPOISON); + dma_fence_signal(&s_fence->finished); } - 
spin_unlock(&rq->lock); } + spin_unlock(&rq->lock); /* Signal all jobs already scheduled to HW */ list_for_each_entry(s_job, &sched->pending_list, list) { diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c index f4930b44f50d..f61fec1cd155 100644 --- a/drivers/gpu/drm/scheduler/sched_entity.c +++ b/drivers/gpu/drm/scheduler/sched_entity.c @@ -70,8 +70,6 @@ int drm_sched_entity_init(struct drm_sched_entity *entity, entity->guilty = guilty; entity->num_sched_list = num_sched_list; entity->priority = priority; - entity->rq_priority = drm_sched_policy == DRM_SCHED_POLICY_DEADLINE ? - DRM_SCHED_PRIORITY_KERNEL : priority; /* * It's perfectly valid to initialize an entity without having a valid * scheduler attached. It's just not valid to use the scheduler before it @@ -81,30 +79,14 @@ int drm_sched_entity_init(struct drm_sched_entity *entity, RCU_INIT_POINTER(entity->last_scheduled, NULL); RB_CLEAR_NODE(&entity->rb_tree_node); - if (num_sched_list && !sched_list[0]->sched_rq) { + if (num_sched_list && !sched_list[0]->rq) { /* Since every entry covered by num_sched_list * should be non-NULL and therefore we warn drivers * not to do this and to fix their DRM calling order. */ pr_warn("%s: called with uninitialized scheduler\n", __func__); } else if (num_sched_list) { - enum drm_sched_priority p = entity->priority; - - /* - * The "priority" of an entity cannot exceed the number of - * run-queues of a scheduler. Protect against num_rqs being 0, - * by converting to signed. Choose the lowest priority - * available. 
- */ - if (p >= sched_list[0]->num_user_rqs) { - dev_err(sched_list[0]->dev, "entity with out-of-bounds priority:%u num_user_rqs:%u\n", - p, sched_list[0]->num_user_rqs); - p = max_t(s32, - (s32)sched_list[0]->num_user_rqs - 1, - (s32)DRM_SCHED_PRIORITY_KERNEL); - entity->priority = p; - } - entity->rq = sched_list[0]->sched_rq[entity->rq_priority]; + entity->rq = sched_list[0]->rq; } init_completion(&entity->entity_idle); @@ -572,7 +554,7 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity) spin_lock(&entity->lock); sched = drm_sched_pick_best(entity->sched_list, entity->num_sched_list); - rq = sched ? sched->sched_rq[entity->rq_priority] : NULL; + rq = sched ? sched->rq : NULL; if (rq != entity->rq) { drm_sched_rq_remove_entity(entity->rq, entity); entity->rq = rq; @@ -614,10 +596,8 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job) if (first) { struct drm_gpu_scheduler *sched; - if (drm_sched_policy == DRM_SCHED_POLICY_DEADLINE) - submit_ts = __drm_sched_entity_get_job_deadline(entity, - submit_ts); - + submit_ts = __drm_sched_entity_get_job_deadline(entity, + submit_ts); sched = drm_sched_rq_add_entity(entity, submit_ts); if (sched) drm_sched_wakeup(sched); diff --git a/drivers/gpu/drm/scheduler/sched_internal.h b/drivers/gpu/drm/scheduler/sched_internal.h index a81bf25569cd..fc0f05ce06af 100644 --- a/drivers/gpu/drm/scheduler/sched_internal.h +++ b/drivers/gpu/drm/scheduler/sched_internal.h @@ -4,13 +4,6 @@ #define _DRM_GPU_SCHEDULER_INTERNAL_H_ -/* Used to choose between FIFO and RR job-scheduling */ -extern int drm_sched_policy; - -#define DRM_SCHED_POLICY_RR 0 -#define DRM_SCHED_POLICY_FIFO 1 -#define DRM_SCHED_POLICY_DEADLINE 2 - bool drm_sched_can_queue(struct drm_gpu_scheduler *sched, struct drm_sched_entity *entity); void drm_sched_wakeup(struct drm_gpu_scheduler *sched); diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c index cfe6bc728271..b35450c45e7b 100644 --- 
a/drivers/gpu/drm/scheduler/sched_main.c +++ b/drivers/gpu/drm/scheduler/sched_main.c @@ -89,15 +89,6 @@ static struct lockdep_map drm_sched_lockdep_map = { }; #endif -int drm_sched_policy = DRM_SCHED_POLICY_DEADLINE; - -/** - * DOC: sched_policy (int) - * Used to override default entities scheduling policy in a run queue. - */ -MODULE_PARM_DESC(sched_policy, "Specify the scheduling policy for entities on a run-queue, " __stringify(DRM_SCHED_POLICY_RR) " = Round Robin, " __stringify(DRM_SCHED_POLICY_FIFO) " = FIFO, " __stringify(DRM_SCHED_POLICY_DEADLINE) " = Virtual deadline (default)."); -module_param_named(sched_policy, drm_sched_policy, int, 0444); - static u32 drm_sched_available_credits(struct drm_gpu_scheduler *sched) { u32 credits; @@ -839,34 +830,6 @@ void drm_sched_wakeup(struct drm_gpu_scheduler *sched) drm_sched_run_job_queue(sched); } -/** - * drm_sched_select_entity - Select next entity to process - * - * @sched: scheduler instance - * - * Return an entity to process or NULL if none are found. - * - * Note, that we break out of the for-loop when "entity" is non-null, which can - * also be an error-pointer--this assures we don't process lower priority - * run-queues. See comments in the respectively called functions. - */ -static struct drm_sched_entity * -drm_sched_select_entity(struct drm_gpu_scheduler *sched) -{ - struct drm_sched_entity *entity = NULL; - int i; - - /* Start with the highest priority. - */ - for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) { - entity = drm_sched_rq_select_entity(sched, sched->sched_rq[i]); - if (entity) - break; - } - - return IS_ERR(entity) ? 
NULL : entity; -} - /** * drm_sched_get_finished_job - fetch the next finished job to be destroyed * @@ -989,8 +952,8 @@ static void drm_sched_run_job_work(struct work_struct *w) int r; /* Find entity with a ready job */ - entity = drm_sched_select_entity(sched); - if (!entity) + entity = drm_sched_rq_select_entity(sched, sched->rq); + if (IS_ERR_OR_NULL(entity)) return; /* No more work */ sched_job = drm_sched_entity_pop_job(entity); @@ -1042,8 +1005,6 @@ static void drm_sched_run_job_work(struct work_struct *w) */ int drm_sched_init(struct drm_gpu_scheduler *sched, const struct drm_sched_init_args *args) { - int i; - sched->ops = args->ops; sched->credit_limit = args->credit_limit; sched->name = args->name; @@ -1053,13 +1014,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, const struct drm_sched_init_ sched->score = args->score ? args->score : &sched->_score; sched->dev = args->dev; - if (args->num_rqs > DRM_SCHED_PRIORITY_COUNT) { - /* This is a gross violation--tell drivers what the problem is. - */ - dev_err(sched->dev, "%s: num_rqs cannot be greater than DRM_SCHED_PRIORITY_COUNT\n", - __func__); - return -EINVAL; - } else if (sched->sched_rq) { + if (sched->rq) { /* Not an error, but warn anyway so drivers can * fine-tune their DRM calling order, and return all * is good. @@ -1085,21 +1040,11 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, const struct drm_sched_init_ sched->own_submit_wq = true; } - sched->num_user_rqs = args->num_rqs; - sched->num_rqs = drm_sched_policy != DRM_SCHED_POLICY_DEADLINE ? 
- args->num_rqs : 1; - sched->sched_rq = kmalloc_array(sched->num_rqs, - sizeof(*sched->sched_rq), - GFP_KERNEL | __GFP_ZERO); - if (!sched->sched_rq) + sched->rq = kmalloc(sizeof(*sched->rq), GFP_KERNEL | __GFP_ZERO); + if (!sched->rq) goto Out_check_own; - for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) { - sched->sched_rq[i] = kzalloc(sizeof(*sched->sched_rq[i]), GFP_KERNEL); - if (!sched->sched_rq[i]) - goto Out_unroll; - drm_sched_rq_init(sched, sched->sched_rq[i]); - } + drm_sched_rq_init(sched, sched->rq); init_waitqueue_head(&sched->job_scheduled); INIT_LIST_HEAD(&sched->pending_list); @@ -1114,12 +1059,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, const struct drm_sched_init_ sched->ready = true; return 0; -Out_unroll: - for (--i ; i >= DRM_SCHED_PRIORITY_KERNEL; i--) - kfree(sched->sched_rq[i]); - kfree(sched->sched_rq); - sched->sched_rq = NULL; Out_check_own: if (sched->own_submit_wq) destroy_workqueue(sched->submit_wq); @@ -1151,25 +1091,21 @@ EXPORT_SYMBOL(drm_sched_init); */ void drm_sched_fini(struct drm_gpu_scheduler *sched) { + + struct drm_sched_rq *rq = sched->rq; struct drm_sched_entity *s_entity; - int i; drm_sched_wqueue_stop(sched); - for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) { - struct drm_sched_rq *rq = sched->sched_rq[i]; - - spin_lock(&rq->lock); - list_for_each_entry(s_entity, &rq->entities, list) - /* - * Prevents reinsertion and marks job_queue as idle, - * it will be removed from the rq in drm_sched_entity_fini() - * eventually - */ - s_entity->stopped = true; - spin_unlock(&rq->lock); - kfree(sched->sched_rq[i]); - } + spin_lock(&rq->lock); + list_for_each_entry(s_entity, &rq->entities, list) + /* + * Prevents reinsertion and marks job_queue as idle, + * it will be removed from the rq in drm_sched_entity_fini() + * eventually + */ + s_entity->stopped = true; + spin_unlock(&rq->lock); /* Wakeup everyone stuck in drm_sched_entity_flush for this scheduler */ wake_up_all(&sched->job_scheduled); 
@@ -1180,8 +1116,8 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched) if (sched->own_submit_wq) destroy_workqueue(sched->submit_wq); sched->ready = false; - kfree(sched->sched_rq); - sched->sched_rq = NULL; + kfree(sched->rq); + sched->rq = NULL; } EXPORT_SYMBOL(drm_sched_fini); @@ -1196,35 +1132,28 @@ EXPORT_SYMBOL(drm_sched_fini); */ void drm_sched_increase_karma(struct drm_sched_job *bad) { - int i; - struct drm_sched_entity *tmp; - struct drm_sched_entity *entity; struct drm_gpu_scheduler *sched = bad->sched; + struct drm_sched_entity *entity, *tmp; + struct drm_sched_rq *rq = sched->rq; /* don't change @bad's karma if it's from KERNEL RQ, * because sometimes GPU hang would cause kernel jobs (like VM updating jobs) * corrupt but keep in mind that kernel jobs always considered good. */ - if (bad->s_priority != DRM_SCHED_PRIORITY_KERNEL) { - atomic_inc(&bad->karma); + if (bad->s_priority == DRM_SCHED_PRIORITY_KERNEL) + return; - for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) { - struct drm_sched_rq *rq = sched->sched_rq[i]; + atomic_inc(&bad->karma); - spin_lock(&rq->lock); - list_for_each_entry_safe(entity, tmp, &rq->entities, list) { - if (bad->s_fence->scheduled.context == - entity->fence_context) { - if (entity->guilty) - atomic_set(entity->guilty, 1); - break; - } - } - spin_unlock(&rq->lock); - if (&entity->list != &rq->entities) - break; + spin_lock(&rq->lock); + list_for_each_entry_safe(entity, tmp, &rq->entities, list) { + if (bad->s_fence->scheduled.context == entity->fence_context) { + if (entity->guilty) + atomic_set(entity->guilty, 1); + break; } } + spin_unlock(&rq->lock); } EXPORT_SYMBOL(drm_sched_increase_karma); diff --git a/drivers/gpu/drm/scheduler/sched_rq.c b/drivers/gpu/drm/scheduler/sched_rq.c index dc643f69da4d..21ee96a37895 100644 --- a/drivers/gpu/drm/scheduler/sched_rq.c +++ b/drivers/gpu/drm/scheduler/sched_rq.c @@ -16,7 +16,7 @@ drm_sched_entity_compare_before(struct rb_node *a, const struct rb_node *b) return 
ktime_before(ea->oldest_job_waiting, eb->oldest_job_waiting); } -static void drm_sched_rq_remove_fifo_locked(struct drm_sched_entity *entity, +static void drm_sched_rq_remove_tree_locked(struct drm_sched_entity *entity, struct drm_sched_rq *rq) { lockdep_assert_held(&entity->lock); @@ -28,7 +28,7 @@ static void drm_sched_rq_remove_fifo_locked(struct drm_sched_entity *entity, } } -static void drm_sched_rq_update_fifo_locked(struct drm_sched_entity *entity, +static void drm_sched_rq_update_tree_locked(struct drm_sched_entity *entity, struct drm_sched_rq *rq, ktime_t ts) { @@ -40,7 +40,7 @@ static void drm_sched_rq_update_fifo_locked(struct drm_sched_entity *entity, lockdep_assert_held(&entity->lock); lockdep_assert_held(&rq->lock); - drm_sched_rq_remove_fifo_locked(entity, rq); + drm_sched_rq_remove_tree_locked(entity, rq); entity->oldest_job_waiting = ts; @@ -65,16 +65,6 @@ void drm_sched_rq_init(struct drm_gpu_scheduler *sched, rq->sched = sched; } -static ktime_t -drm_sched_rq_get_rr_deadline(struct drm_sched_rq *rq) -{ - lockdep_assert_held(&rq->lock); - - rq->rr_deadline = ktime_add_ns(rq->rr_deadline, 1); - - return rq->rr_deadline; -} - /** * drm_sched_rq_add_entity - add an entity * @@ -109,9 +99,7 @@ drm_sched_rq_add_entity(struct drm_sched_entity *entity, ktime_t ts) list_add_tail(&entity->list, &rq->entities); } - if (drm_sched_policy == DRM_SCHED_POLICY_RR) - ts = drm_sched_rq_get_rr_deadline(rq); - drm_sched_rq_update_fifo_locked(entity, rq, ts); + drm_sched_rq_update_tree_locked(entity, rq, ts); spin_unlock(&rq->lock); spin_unlock(&entity->lock); @@ -140,7 +128,7 @@ void drm_sched_rq_remove_entity(struct drm_sched_rq *rq, atomic_dec(rq->sched->score); list_del_init(&entity->list); - drm_sched_rq_remove_fifo_locked(entity, rq); + drm_sched_rq_remove_tree_locked(entity, rq); spin_unlock(&rq->lock); } @@ -159,17 +147,11 @@ void drm_sched_rq_pop_entity(struct drm_sched_entity *entity) if (!next_job) return; - if (drm_sched_policy == 
DRM_SCHED_POLICY_DEADLINE)
-		ts = drm_sched_entity_get_job_deadline(entity, next_job);
-	else if (drm_sched_policy == DRM_SCHED_POLICY_FIFO)
-		ts = next_job->submit_ts;
-	else
-		ts = drm_sched_rq_get_rr_deadline(rq);
-
+	ts = drm_sched_entity_get_job_deadline(entity, next_job);
 	spin_lock(&entity->lock);
 	rq = entity->rq;
 	spin_lock(&rq->lock);
-	drm_sched_rq_update_fifo_locked(entity, rq, ts);
+	drm_sched_rq_update_tree_locked(entity, rq, ts);
 	spin_unlock(&rq->lock);
 	spin_unlock(&entity->lock);
 }
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index f0fbd95bb39b..cd2a119f6da1 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -147,8 +147,6 @@ struct drm_sched_entity {
 	 */
 	struct spsc_queue job_queue;
 
-	enum drm_sched_priority rq_priority;
-
 	/**
	 * @fence_seq:
	 *
@@ -552,9 +550,7 @@ struct drm_gpu_scheduler {
 	atomic_t credit_count;
 	long timeout;
 	const char *name;
-	u32 num_rqs;
-	u32 num_user_rqs;
-	struct drm_sched_rq **sched_rq;
+	struct drm_sched_rq *rq;
 	wait_queue_head_t job_scheduled;
 	atomic64_t job_id_count;
 	struct workqueue_struct *submit_wq;

From patchwork Mon Mar 31 20:17:02 2025
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 14034090
From: Tvrtko Ursulin
To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: kernel-dev@igalia.com, Tvrtko Ursulin, Christian König, Danilo Krummrich, Matthew Brost, Philipp Stanner
Subject: [RFC v3 11/14] drm/sched: Queue all free credits in one worker invocation
Date: Mon, 31 Mar 2025 21:17:02 +0100
Message-ID: <20250331201705.60663-12-tvrtko.ursulin@igalia.com>
In-Reply-To: <20250331201705.60663-1-tvrtko.ursulin@igalia.com>
References: <20250331201705.60663-1-tvrtko.ursulin@igalia.com>
There is no reason to queue just a single job and then re-queue the worker when the scheduler can take more jobs. We can simply feed the hardware with as much as it can take in one go, and hopefully win some latency.

Signed-off-by: Tvrtko Ursulin
Cc: Christian König
Cc: Danilo Krummrich
Cc: Matthew Brost
Cc: Philipp Stanner
---
 drivers/gpu/drm/scheduler/sched_internal.h |   2 -
 drivers/gpu/drm/scheduler/sched_main.c     | 127 ++++++++++-----------
 drivers/gpu/drm/scheduler/sched_rq.c       |  17 +--
 3 files changed, 61 insertions(+), 85 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_internal.h b/drivers/gpu/drm/scheduler/sched_internal.h
index fc0f05ce06af..4b3fc4a098bb 100644
--- a/drivers/gpu/drm/scheduler/sched_internal.h
+++ b/drivers/gpu/drm/scheduler/sched_internal.h
@@ -4,8 +4,6 @@
 #define _DRM_GPU_SCHEDULER_INTERNAL_H_
 
-bool drm_sched_can_queue(struct drm_gpu_scheduler *sched,
-			 struct drm_sched_entity *entity);
 void drm_sched_wakeup(struct drm_gpu_scheduler *sched);
 
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index b35450c45e7b..8aaa6a0dcc70 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -100,35 +100,6 @@ static u32 drm_sched_available_credits(struct drm_gpu_scheduler *sched)
 	return credits;
 }
 
-/**
- * drm_sched_can_queue -- Can we queue more to the hardware?
- * @sched: scheduler instance
- * @entity: the scheduler entity
- *
- * Return true if we can push at least one more job from @entity, false
- * otherwise.
- */ -bool drm_sched_can_queue(struct drm_gpu_scheduler *sched, - struct drm_sched_entity *entity) -{ - struct drm_sched_job *s_job; - - s_job = drm_sched_entity_queue_peek(entity); - if (!s_job) - return false; - - /* If a job exceeds the credit limit, truncate it to the credit limit - * itself to guarantee forward progress. - */ - if (s_job->credits > sched->credit_limit) { - dev_WARN(sched->dev, - "Jobs may not exceed the credit limit, truncate.\n"); - s_job->credits = sched->credit_limit; - } - - return drm_sched_available_credits(sched) >= s_job->credits; -} - /** * drm_sched_run_job_queue - enqueue run-job work * @sched: scheduler instance @@ -945,54 +916,72 @@ static void drm_sched_run_job_work(struct work_struct *w) { struct drm_gpu_scheduler *sched = container_of(w, struct drm_gpu_scheduler, work_run_job); + u32 job_credits, submitted_credits = 0; struct drm_sched_entity *entity; - struct dma_fence *fence; struct drm_sched_fence *s_fence; struct drm_sched_job *sched_job; - int r; + struct dma_fence *fence; - /* Find entity with a ready job */ - entity = drm_sched_rq_select_entity(sched, sched->rq); - if (IS_ERR_OR_NULL(entity)) - return; /* No more work */ + while (!READ_ONCE(sched->pause_submit)) { + /* Find entity with a ready job */ + entity = drm_sched_rq_select_entity(sched, sched->rq); + if (!entity) + break; /* No more work */ + + /* + * If a job exceeds the credit limit truncate it to guarantee + * forward progress. 
+ */ + sched_job = drm_sched_entity_queue_peek(entity); + job_credits = sched_job->credits; + if (dev_WARN_ONCE(sched->dev, job_credits > sched->credit_limit, + "Jobs may not exceed the credit limit, truncating.\n")) + job_credits = sched_job->credits = sched->credit_limit; + + if (job_credits > drm_sched_available_credits(sched)) { + complete_all(&entity->entity_idle); + break; + } + + sched_job = drm_sched_entity_pop_job(entity); + if (!sched_job) { + /* Top entity is not yet runnable after all */ + complete_all(&entity->entity_idle); + continue; + } + + s_fence = sched_job->s_fence; + drm_sched_job_begin(sched_job); + trace_drm_run_job(sched_job, entity); + submitted_credits += job_credits; + atomic_add(job_credits, &sched->credit_count); + + fence = sched->ops->run_job(sched_job); + drm_sched_fence_scheduled(s_fence, fence); + + if (!IS_ERR_OR_NULL(fence)) { + int r; + + /* Drop for original kref_init of the fence */ + dma_fence_put(fence); + + r = dma_fence_add_callback(fence, &sched_job->cb, + drm_sched_job_done_cb); + if (r == -ENOENT) + drm_sched_job_done(sched_job, fence->error); + else if (r) + DRM_DEV_ERROR(sched->dev, + "fence add callback failed (%d)\n", r); + } else { + drm_sched_job_done(sched_job, IS_ERR(fence) ? + PTR_ERR(fence) : 0); + } - sched_job = drm_sched_entity_pop_job(entity); - if (!sched_job) { complete_all(&entity->entity_idle); - drm_sched_run_job_queue(sched); - return; } - s_fence = sched_job->s_fence; - - atomic_add(sched_job->credits, &sched->credit_count); - drm_sched_job_begin(sched_job); - - trace_drm_run_job(sched_job, entity); - /* - * The run_job() callback must by definition return a fence whose - * refcount has been incremented for the scheduler already. 
- */ - fence = sched->ops->run_job(sched_job); - complete_all(&entity->entity_idle); - drm_sched_fence_scheduled(s_fence, fence); - - if (!IS_ERR_OR_NULL(fence)) { - r = dma_fence_add_callback(fence, &sched_job->cb, - drm_sched_job_done_cb); - if (r == -ENOENT) - drm_sched_job_done(sched_job, fence->error); - else if (r) - DRM_DEV_ERROR(sched->dev, "fence add callback failed (%d)\n", r); - - dma_fence_put(fence); - } else { - drm_sched_job_done(sched_job, IS_ERR(fence) ? - PTR_ERR(fence) : 0); - } - - wake_up(&sched->job_scheduled); - drm_sched_run_job_queue(sched); + if (submitted_credits) + wake_up(&sched->job_scheduled); } /** diff --git a/drivers/gpu/drm/scheduler/sched_rq.c b/drivers/gpu/drm/scheduler/sched_rq.c index 21ee96a37895..735bcb194c03 100644 --- a/drivers/gpu/drm/scheduler/sched_rq.c +++ b/drivers/gpu/drm/scheduler/sched_rq.c @@ -164,9 +164,7 @@ void drm_sched_rq_pop_entity(struct drm_sched_entity *entity) * * Find oldest waiting ready entity. * - * Return an entity if one is found; return an error-pointer (!NULL) if an - * entity was ready, but the scheduler had insufficient credits to accommodate - * its job; return NULL, if no ready entity was found. + * Return an entity if one is found or NULL if no ready entity was found. */ struct drm_sched_entity * drm_sched_rq_select_entity(struct drm_gpu_scheduler *sched, @@ -185,17 +183,8 @@ drm_sched_rq_select_entity(struct drm_gpu_scheduler *sched, } spin_unlock(&rq->lock); - if (!entity) - return NULL; - - /* - * If scheduler cannot take more jobs signal the caller to not consider - * lower priority queues. 
-	 */
-	if (!drm_sched_can_queue(sched, entity))
-		return ERR_PTR(-ENOSPC);
-
-	reinit_completion(&entity->entity_idle);
+	if (entity)
+		reinit_completion(&entity->entity_idle);
 
 	return entity;
 }

From patchwork Mon Mar 31 20:17:03 2025
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 14034095
From: Tvrtko Ursulin
To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: kernel-dev@igalia.com, Tvrtko Ursulin, Christian König, Danilo Krummrich, Matthew Brost, Philipp Stanner
Subject: [RFC v3 12/14] drm/sched: Embed run queue singleton into the scheduler
Date: Mon, 31 Mar 2025 21:17:03 +0100
Message-ID: <20250331201705.60663-13-tvrtko.ursulin@igalia.com>
In-Reply-To: <20250331201705.60663-1-tvrtko.ursulin@igalia.com>
References: <20250331201705.60663-1-tvrtko.ursulin@igalia.com>

Now that the run-queue to scheduler relationship is always 1:1, we can embed the run queue directly in the scheduler struct and drop some allocation and error handling code.
Signed-off-by: Tvrtko Ursulin Cc: Christian König Cc: Danilo Krummrich Cc: Matthew Brost Cc: Philipp Stanner --- drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 6 ++-- drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 6 ++-- drivers/gpu/drm/amd/amdgpu/amdgpu_job.h | 5 ++- drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h | 8 +++-- drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c | 8 ++--- drivers/gpu/drm/amd/amdgpu/amdgpu_xcp.c | 8 ++--- drivers/gpu/drm/scheduler/sched_entity.c | 34 +++++++++------------ drivers/gpu/drm/scheduler/sched_fence.c | 2 +- drivers/gpu/drm/scheduler/sched_internal.h | 6 ++-- drivers/gpu/drm/scheduler/sched_main.c | 31 +++---------------- drivers/gpu/drm/scheduler/sched_rq.c | 18 +++++------ include/drm/gpu_scheduler.h | 5 +-- 12 files changed, 59 insertions(+), 78 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c index 82df06a72ee0..e18e180bf32c 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c @@ -1108,7 +1108,8 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p) if (p->gang_size > 1 && !adev->vm_manager.concurrent_flush) { for (i = 0; i < p->gang_size; ++i) { struct drm_sched_entity *entity = p->entities[i]; - struct drm_gpu_scheduler *sched = entity->rq->sched; + struct drm_gpu_scheduler *sched = + container_of(entity->rq, typeof(*sched), rq); struct amdgpu_ring *ring = to_amdgpu_ring(sched); if (amdgpu_vmid_uses_reserved(vm, ring->vm_hub)) @@ -1236,7 +1237,8 @@ static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p) return r; } - sched = p->gang_leader->base.entity->rq->sched; + sched = container_of(p->gang_leader->base.entity->rq, typeof(*sched), + rq); while ((fence = amdgpu_sync_get_fence(&p->sync))) { struct drm_sched_fence *s_fence = to_drm_sched_fence(fence); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c index 9440af58073b..e3d4f7503738 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c @@ -359,7 +359,9 @@ static struct dma_fence * amdgpu_job_prepare_job(struct drm_sched_job *sched_job, struct drm_sched_entity *s_entity) { - struct amdgpu_ring *ring = to_amdgpu_ring(s_entity->rq->sched); + struct drm_gpu_scheduler *sched = + container_of(s_entity->rq, typeof(*sched), rq); + struct amdgpu_ring *ring = to_amdgpu_ring(sched); struct amdgpu_job *job = to_amdgpu_job(sched_job); struct dma_fence *fence; int r; @@ -459,7 +461,7 @@ drm_sched_entity_queue_pop(struct drm_sched_entity *entity) void amdgpu_job_stop_all_jobs_on_sched(struct drm_gpu_scheduler *sched) { - struct drm_sched_rq *rq = sched->rq; + struct drm_sched_rq *rq = &sched->rq; struct drm_sched_entity *s_entity; struct drm_sched_job *s_job; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h index ce6b9ba967ff..d6872baeba1e 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h @@ -85,7 +85,10 @@ struct amdgpu_job { static inline struct amdgpu_ring *amdgpu_job_ring(struct amdgpu_job *job) { - return to_amdgpu_ring(job->base.entity->rq->sched); + struct drm_gpu_scheduler *sched = + container_of(job->base.entity->rq, typeof(*sched), rq); + + return to_amdgpu_ring(sched); } int amdgpu_job_alloc(struct amdgpu_device *adev, struct amdgpu_vm *vm, diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h index 11dd2e0f7979..197d20a37afb 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h @@ -145,6 +145,7 @@ TRACE_EVENT(amdgpu_cs, struct amdgpu_ib *ib), TP_ARGS(p, job, ib), TP_STRUCT__entry( + __field(struct drm_gpu_scheduler *, sched) __field(struct amdgpu_bo_list *, bo_list) __field(u32, ring) __field(u32, dw) @@ -152,11 +153,14 @@ TRACE_EVENT(amdgpu_cs, ), TP_fast_assign( + __entry->sched = container_of(job->base.entity->rq, + typeof(*__entry->sched), + rq); __entry->bo_list = 
p->bo_list; - __entry->ring = to_amdgpu_ring(job->base.entity->rq->sched)->idx; + __entry->ring = to_amdgpu_ring(__entry->sched)->idx; __entry->dw = ib->length_dw; __entry->fences = amdgpu_fence_count_emitted( - to_amdgpu_ring(job->base.entity->rq->sched)); + to_amdgpu_ring(__entry->sched)); ), TP_printk("bo_list=%p, ring=%u, dw=%u, fences=%u", __entry->bo_list, __entry->ring, __entry->dw, diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c index 46d9fb433ab2..42f2bfb30af1 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c @@ -105,13 +105,13 @@ static int amdgpu_vm_sdma_prepare(struct amdgpu_vm_update_params *p, static int amdgpu_vm_sdma_commit(struct amdgpu_vm_update_params *p, struct dma_fence **fence) { + struct drm_gpu_scheduler *sched = + container_of(p->vm->delayed.rq, typeof(*sched), rq); + struct amdgpu_ring *ring = + container_of(sched, struct amdgpu_ring, sched); struct amdgpu_ib *ib = p->job->ibs; - struct amdgpu_ring *ring; struct dma_fence *f; - ring = container_of(p->vm->delayed.rq->sched, struct amdgpu_ring, - sched); - WARN_ON(ib->length_dw == 0); amdgpu_ring_pad_ib(ring, ib); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_xcp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_xcp.c index 23b6f7a4aa4a..ab132dae8183 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_xcp.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_xcp.c @@ -420,15 +420,15 @@ int amdgpu_xcp_open_device(struct amdgpu_device *adev, void amdgpu_xcp_release_sched(struct amdgpu_device *adev, struct amdgpu_ctx_entity *entity) { - struct drm_gpu_scheduler *sched; - struct amdgpu_ring *ring; + struct drm_gpu_scheduler *sched = + container_of(entity->entity.rq, typeof(*sched), rq); if (!adev->xcp_mgr) return; - sched = entity->entity.rq->sched; if (drm_sched_wqueue_ready(sched)) { - ring = to_amdgpu_ring(entity->entity.rq->sched); + struct amdgpu_ring *ring = to_amdgpu_ring(sched); + 
atomic_dec(&adev->xcp_mgr->xcp[ring->xcp_id].ref_cnt); } } diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c index f61fec1cd155..c6ed0d1642f3 100644 --- a/drivers/gpu/drm/scheduler/sched_entity.c +++ b/drivers/gpu/drm/scheduler/sched_entity.c @@ -76,19 +76,12 @@ int drm_sched_entity_init(struct drm_sched_entity *entity, * is initialized itself. */ entity->sched_list = num_sched_list > 1 ? sched_list : NULL; + if (num_sched_list) { + entity->sched_list = num_sched_list > 1 ? sched_list : NULL; + entity->rq = &sched_list[0]->rq; + } RCU_INIT_POINTER(entity->last_scheduled, NULL); RB_CLEAR_NODE(&entity->rb_tree_node); - - if (num_sched_list && !sched_list[0]->rq) { - /* Since every entry covered by num_sched_list - * should be non-NULL and therefore we warn drivers - * not to do this and to fix their DRM calling order. - */ - pr_warn("%s: called with uninitialized scheduler\n", __func__); - } else if (num_sched_list) { - entity->rq = sched_list[0]->rq; - } - init_completion(&entity->entity_idle); /* We start in an idle state. 
 */
@@ -275,7 +268,7 @@ long drm_sched_entity_flush(struct drm_sched_entity *entity, long timeout)
 	if (!entity->rq)
 		return 0;
 
-	sched = entity->rq->sched;
+	sched = container_of(entity->rq, typeof(*sched), rq);
 	/**
 	 * The client will not queue more IBs during this fini, consume existing
 	 * queued IBs or discard them on SIGKILL
@@ -366,9 +359,11 @@ static void drm_sched_entity_wakeup(struct dma_fence *f,
 {
 	struct drm_sched_entity *entity =
 		container_of(cb, struct drm_sched_entity, cb);
+	struct drm_gpu_scheduler *sched =
+		container_of(entity->rq, typeof(*sched), rq);
 
 	drm_sched_entity_clear_dep(f, cb);
-	drm_sched_wakeup(entity->rq->sched);
+	drm_sched_wakeup(sched);
 }
 
 /**
@@ -415,7 +410,8 @@ drm_sched_entity_get_job_deadline(struct drm_sched_entity *entity,
  */
 static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity)
 {
-	struct drm_gpu_scheduler *sched = entity->rq->sched;
+	struct drm_gpu_scheduler *sched =
+		container_of(entity->rq, typeof(*sched), rq);
 	struct dma_fence *fence = entity->dependency;
 	struct drm_sched_fence *s_fence;
 
@@ -554,7 +550,7 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
 	spin_lock(&entity->lock);
 	sched = drm_sched_pick_best(entity->sched_list, entity->num_sched_list);
-	rq = sched ? sched->rq : NULL;
+	rq = sched ? &sched->rq : NULL;
 	if (rq != entity->rq) {
 		drm_sched_rq_remove_entity(entity->rq, entity);
 		entity->rq = rq;
@@ -577,11 +573,13 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
 void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
 {
 	struct drm_sched_entity *entity = sched_job->entity;
-	bool first;
+	struct drm_gpu_scheduler *sched =
+		container_of(entity->rq, typeof(*sched), rq);
 	ktime_t submit_ts;
+	bool first;
 
 	trace_drm_sched_job(sched_job, entity);
-	atomic_inc(entity->rq->sched->score);
+	atomic_inc(sched->score);
 	WRITE_ONCE(entity->last_user, current->group_leader);
 
 	/*
@@ -594,8 +592,6 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
 
 	/* first job wakes up scheduler */
 	if (first) {
-		struct drm_gpu_scheduler *sched;
-
 		submit_ts = __drm_sched_entity_get_job_deadline(entity,
 								submit_ts);
 		sched = drm_sched_rq_add_entity(entity, submit_ts);
diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
index e971528504a5..bb48e690862d 100644
--- a/drivers/gpu/drm/scheduler/sched_fence.c
+++ b/drivers/gpu/drm/scheduler/sched_fence.c
@@ -225,7 +225,7 @@ void drm_sched_fence_init(struct drm_sched_fence *fence,
 {
 	unsigned seq;
 
-	fence->sched = entity->rq->sched;
+	fence->sched = container_of(entity->rq, typeof(*fence->sched), rq);
 	seq = atomic_inc_return(&entity->fence_seq);
 	dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
 		       &fence->lock, entity->fence_context, seq);
diff --git a/drivers/gpu/drm/scheduler/sched_internal.h b/drivers/gpu/drm/scheduler/sched_internal.h
index 4b3fc4a098bb..f50e54bfaccc 100644
--- a/drivers/gpu/drm/scheduler/sched_internal.h
+++ b/drivers/gpu/drm/scheduler/sched_internal.h
@@ -6,11 +6,9 @@
 
 void drm_sched_wakeup(struct drm_gpu_scheduler *sched);
 
-void drm_sched_rq_init(struct drm_gpu_scheduler *sched,
-		       struct drm_sched_rq *rq);
+void drm_sched_rq_init(struct drm_gpu_scheduler *sched);
 
 struct drm_sched_entity *
-drm_sched_rq_select_entity(struct drm_gpu_scheduler *sched,
-			   struct drm_sched_rq *rq);
+drm_sched_rq_select_entity(struct drm_gpu_scheduler *sched);
 
 struct drm_gpu_scheduler *
 drm_sched_rq_add_entity(struct drm_sched_entity *entity, ktime_t ts);
 void drm_sched_rq_remove_entity(struct drm_sched_rq *rq,
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 8aaa6a0dcc70..4c52953a01a3 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -578,7 +578,7 @@ void drm_sched_job_arm(struct drm_sched_job *job)
 	BUG_ON(!entity);
 	drm_sched_entity_select_rq(entity);
-	sched = entity->rq->sched;
+	sched = container_of(entity->rq, typeof(*sched), rq);
 
 	job->sched = sched;
 	job->s_priority = entity->priority;
@@ -924,7 +924,7 @@ static void drm_sched_run_job_work(struct work_struct *w)
 
 	while (!READ_ONCE(sched->pause_submit)) {
 		/* Find entity with a ready job */
-		entity = drm_sched_rq_select_entity(sched, sched->rq);
+		entity = drm_sched_rq_select_entity(sched);
 		if (!entity)
 			break;	/* No more work */
@@ -1003,15 +1003,6 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, const struct drm_sched_init_
 	sched->score = args->score ? args->score : &sched->_score;
 	sched->dev = args->dev;
 
-	if (sched->rq) {
-		/* Not an error, but warn anyway so drivers can
-		 * fine-tune their DRM calling order, and return all
-		 * is good.
-		 */
-		dev_warn(sched->dev, "%s: scheduler already initialized!\n", __func__);
-		return 0;
-	}
-
 	if (args->submit_wq) {
 		sched->submit_wq = args->submit_wq;
 		sched->own_submit_wq = false;
@@ -1029,11 +1020,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, const struct drm_sched_init_
 		sched->own_submit_wq = true;
 	}
 
-	sched->rq = kmalloc(sizeof(*sched->rq), GFP_KERNEL | __GFP_ZERO);
-	if (!sched->rq)
-		goto Out_check_own;
-
-	drm_sched_rq_init(sched, sched->rq);
+	drm_sched_rq_init(sched);
 
 	init_waitqueue_head(&sched->job_scheduled);
 	INIT_LIST_HEAD(&sched->pending_list);
@@ -1048,12 +1035,6 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, const struct drm_sched_init_
 	sched->ready = true;
 
 	return 0;
-
-Out_check_own:
-	if (sched->own_submit_wq)
-		destroy_workqueue(sched->submit_wq);
-	dev_err(sched->dev, "%s: Failed to setup GPU scheduler--out of memory\n", __func__);
-	return -ENOMEM;
 }
 EXPORT_SYMBOL(drm_sched_init);
 
@@ -1081,7 +1062,7 @@ EXPORT_SYMBOL(drm_sched_init);
 void drm_sched_fini(struct drm_gpu_scheduler *sched)
 {
-	struct drm_sched_rq *rq = sched->rq;
+	struct drm_sched_rq *rq = &sched->rq;
 	struct drm_sched_entity *s_entity;
 
 	drm_sched_wqueue_stop(sched);
@@ -1105,8 +1086,6 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched)
 	if (sched->own_submit_wq)
 		destroy_workqueue(sched->submit_wq);
 	sched->ready = false;
-	kfree(sched->rq);
-	sched->rq = NULL;
 }
 EXPORT_SYMBOL(drm_sched_fini);
 
@@ -1123,7 +1102,7 @@ void drm_sched_increase_karma(struct drm_sched_job *bad)
 {
 	struct drm_gpu_scheduler *sched = bad->sched;
 	struct drm_sched_entity *entity, *tmp;
-	struct drm_sched_rq *rq = sched->rq;
+	struct drm_sched_rq *rq = &sched->rq;
 
 	/* don't change @bad's karma if it's from KERNEL RQ,
 	 * because sometimes GPU hang would cause kernel jobs (like VM updating jobs)
diff --git a/drivers/gpu/drm/scheduler/sched_rq.c b/drivers/gpu/drm/scheduler/sched_rq.c
index 735bcb194c03..4b142a4c89d1 100644
--- a/drivers/gpu/drm/scheduler/sched_rq.c
+++ b/drivers/gpu/drm/scheduler/sched_rq.c
@@ -52,17 +52,16 @@ static void drm_sched_rq_update_tree_locked(struct drm_sched_entity *entity,
  * drm_sched_rq_init - initialize a given run queue struct
  *
  * @sched: scheduler instance to associate with this run queue
- * @rq: scheduler run queue
  *
  * Initializes a scheduler runqueue.
  */
-void drm_sched_rq_init(struct drm_gpu_scheduler *sched,
-		       struct drm_sched_rq *rq)
+void drm_sched_rq_init(struct drm_gpu_scheduler *sched)
 {
+	struct drm_sched_rq *rq = &sched->rq;
+
 	spin_lock_init(&rq->lock);
 	INIT_LIST_HEAD(&rq->entities);
 	rq->rb_tree_root = RB_ROOT_CACHED;
-	rq->sched = sched;
 }
 
 /**
@@ -91,8 +90,8 @@ drm_sched_rq_add_entity(struct drm_sched_entity *entity, ktime_t ts)
 	}
 
 	rq = entity->rq;
+	sched = container_of(rq, typeof(*sched), rq);
 	spin_lock(&rq->lock);
-	sched = rq->sched;
 
 	if (list_empty(&entity->list)) {
 		atomic_inc(sched->score);
@@ -118,6 +117,8 @@ drm_sched_rq_add_entity(struct drm_sched_entity *entity, ktime_t ts)
 void drm_sched_rq_remove_entity(struct drm_sched_rq *rq,
 				struct drm_sched_entity *entity)
 {
+	struct drm_gpu_scheduler *sched = container_of(rq, typeof(*sched), rq);
+
 	lockdep_assert_held(&entity->lock);
 
 	if (list_empty(&entity->list))
@@ -125,7 +126,7 @@ void drm_sched_rq_remove_entity(struct drm_sched_rq *rq,
 
 	spin_lock(&rq->lock);
 
-	atomic_dec(rq->sched->score);
+	atomic_dec(sched->score);
 	list_del_init(&entity->list);
 	drm_sched_rq_remove_tree_locked(entity, rq);
 
@@ -160,17 +161,16 @@ void drm_sched_rq_pop_entity(struct drm_sched_entity *entity)
  * drm_sched_rq_select_entity - Select an entity which provides a job to run
  *
  * @sched: the gpu scheduler
- * @rq: scheduler run queue to check.
  *
  * Find oldest waiting ready entity.
  *
 * Return an entity if one is found or NULL if no ready entity was found.
 */
 struct drm_sched_entity *
-drm_sched_rq_select_entity(struct drm_gpu_scheduler *sched,
-			   struct drm_sched_rq *rq)
+drm_sched_rq_select_entity(struct drm_gpu_scheduler *sched)
 {
 	struct drm_sched_entity *entity = NULL;
+	struct drm_sched_rq *rq = &sched->rq;
 	struct rb_node *rb;
 
 	spin_lock(&rq->lock);
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index cd2a119f6da1..1e1dd16a0d9a 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -238,7 +238,6 @@ struct drm_sched_entity {
 /**
  * struct drm_sched_rq - queue of entities to be scheduled.
  *
- * @sched: the scheduler to which this rq belongs to.
  * @lock: protects @entities, @rb_tree_root and @rr_deadline.
  * @entities: list of the entities to be scheduled.
  * @rb_tree_root: root of time based priority queue of entities for FIFO scheduling
@@ -248,8 +247,6 @@ struct drm_sched_entity {
  *   the next entity to emit commands from.
  */
 struct drm_sched_rq {
-	struct drm_gpu_scheduler	*sched;
-
 	spinlock_t			lock;
 	/* Following members are protected by the @lock: */
 	ktime_t				rr_deadline;
@@ -550,7 +547,7 @@ struct drm_gpu_scheduler {
 	atomic_t			credit_count;
 	long				timeout;
 	const char			*name;
-	struct drm_sched_rq		*rq;
+	struct drm_sched_rq		rq;
 	wait_queue_head_t		job_scheduled;
 	atomic64_t			job_id_count;
 	struct workqueue_struct		*submit_wq;

From patchwork Mon Mar 31 20:17:04 2025
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 14034097
From: Tvrtko Ursulin
To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: kernel-dev@igalia.com, Tvrtko Ursulin, Christian König,
 Danilo Krummrich, Matthew Brost, Philipp Stanner
Subject: [RFC v3 13/14] drm/sched: De-clutter drm_sched_init
Date: Mon, 31 Mar 2025 21:17:04 +0100
Message-ID: <20250331201705.60663-14-tvrtko.ursulin@igalia.com>
In-Reply-To: <20250331201705.60663-1-tvrtko.ursulin@igalia.com>
References: <20250331201705.60663-1-tvrtko.ursulin@igalia.com>

Move work queue allocation into a helper for a more streamlined function
body.

Signed-off-by: Tvrtko Ursulin
Cc: Christian König
Cc: Danilo Krummrich
Cc: Matthew Brost
Cc: Philipp Stanner
---
 drivers/gpu/drm/scheduler/sched_main.c | 28 +++++++++++++-------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 4c52953a01a3..76eee43547a9 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -83,12 +83,6 @@
 #define CREATE_TRACE_POINTS
 #include "gpu_scheduler_trace.h"
 
-#ifdef CONFIG_LOCKDEP
-static struct lockdep_map drm_sched_lockdep_map = {
-	.name = "drm_sched_lockdep_map"
-};
-#endif
-
 static u32 drm_sched_available_credits(struct drm_gpu_scheduler *sched)
 {
 	u32 credits;
@@ -984,6 +978,19 @@ static void drm_sched_run_job_work(struct work_struct *w)
 	wake_up(&sched->job_scheduled);
 }
 
+static struct workqueue_struct *drm_sched_alloc_wq(const char *name)
+{
+#if (IS_ENABLED(CONFIG_LOCKDEP))
+	static struct lockdep_map map = {
+		.name = "drm_sched_lockdep_map"
+	};
+
+	return alloc_ordered_workqueue_lockdep_map(name, WQ_MEM_RECLAIM, &map);
+#else
+	return alloc_ordered_workqueue(name, WQ_MEM_RECLAIM);
+#endif
+}
+
 /**
  * drm_sched_init - Init a gpu scheduler instance
  *
@@ -1007,16 +1014,9 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, const struct drm_sched_init_
 		sched->submit_wq = args->submit_wq;
 		sched->own_submit_wq = false;
 	} else {
-#ifdef CONFIG_LOCKDEP
-		sched->submit_wq = alloc_ordered_workqueue_lockdep_map(args->name,
-								       WQ_MEM_RECLAIM,
-								       &drm_sched_lockdep_map);
-#else
-		sched->submit_wq = alloc_ordered_workqueue(args->name, WQ_MEM_RECLAIM);
-#endif
+		sched->submit_wq = drm_sched_alloc_wq(args->name);
 		if (!sched->submit_wq)
 			return -ENOMEM;
-
 		sched->own_submit_wq = true;
 	}

From patchwork Mon Mar 31 20:17:05 2025
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 14034091
From: Tvrtko Ursulin
To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: kernel-dev@igalia.com, Tvrtko Ursulin, Christian König,
 Danilo Krummrich, Matthew Brost, Philipp Stanner,
 Pierre-Eric Pelloux-Prayer
Subject: [RFC v3 14/14] drm/sched: Scale deadlines depending on queue depth
Date: Mon, 31 Mar 2025 21:17:05 +0100
Message-ID: <20250331201705.60663-15-tvrtko.ursulin@igalia.com>
In-Reply-To: <20250331201705.60663-1-tvrtko.ursulin@igalia.com>
References: <20250331201705.60663-1-tvrtko.ursulin@igalia.com>

So far deadline based scheduling has been able to remove the need for
separate per-priority run queues and to alleviate the starvation issues
which hamper FIFO where somewhat reasonable clients are concerned.
However, because the deadline implementation still uses submission time
as its baseline criterion (the current DRM scheduler design makes it
difficult to consider a job's, or entity's, "runnable" timestamp as an
alternative), it shares FIFO's weakness against clients which rapidly
submit deep job queues. In those cases the deadline scheduler will be
similarly unfair as FIFO is.
One simple approach to somewhat alleviate that and apply some fairness
is to scale the relative deadlines by client queue depth. On top of
queue depth, scaling is also based on client priority: kernel
submissions are aggressively pulled in, while userspace priority levels
are pushed out proportionally to the decrease in priority.

Signed-off-by: Tvrtko Ursulin
Cc: Christian König
Cc: Danilo Krummrich
Cc: Matthew Brost
Cc: Philipp Stanner
Cc: Pierre-Eric Pelloux-Prayer
---
 drivers/gpu/drm/scheduler/sched_entity.c   | 39 ++++++++++++----------
 drivers/gpu/drm/scheduler/sched_internal.h |  4 ---
 drivers/gpu/drm/scheduler/sched_rq.c       |  4 +--
 include/drm/gpu_scheduler.h                |  6 ++--
 4 files changed, 25 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index c6ed0d1642f3..98be867dcf41 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -387,21 +387,25 @@ static ktime_t
 __drm_sched_entity_get_job_deadline(struct drm_sched_entity *entity,
 				    ktime_t submit_ts)
 {
-	static const unsigned int d_us[] = {
-		[DRM_SCHED_PRIORITY_KERNEL] = 100,
-		[DRM_SCHED_PRIORITY_HIGH] = 1000,
-		[DRM_SCHED_PRIORITY_NORMAL] = 5000,
-		[DRM_SCHED_PRIORITY_LOW] = 100000,
+	static const long d_us[] = {
+		[DRM_SCHED_PRIORITY_KERNEL] = -1000,
+		[DRM_SCHED_PRIORITY_HIGH] = 1000,
+		[DRM_SCHED_PRIORITY_NORMAL] = 2500,
+		[DRM_SCHED_PRIORITY_LOW] = 10000,
 	};
+	static const unsigned int shift[] = {
+		[DRM_SCHED_PRIORITY_KERNEL] = 4,
+		[DRM_SCHED_PRIORITY_HIGH] = 0,
+		[DRM_SCHED_PRIORITY_NORMAL] = 1,
+		[DRM_SCHED_PRIORITY_LOW] = 2,
+	};
+	const unsigned int prio = entity->priority;
+	long d;
 
-	return ktime_add_us(submit_ts, d_us[entity->priority]);
-}
+	d = d_us[prio] *
+	    ((spsc_queue_count(&entity->job_queue) + 1) << shift[prio]);
 
-ktime_t
-drm_sched_entity_get_job_deadline(struct drm_sched_entity *entity,
-				  struct drm_sched_job *job)
-{
-	return __drm_sched_entity_get_job_deadline(entity, job->submit_ts);
+	return ktime_add_us(submit_ts, d);
 }
 
 /*
@@ -575,7 +579,7 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
 	struct drm_sched_entity *entity = sched_job->entity;
 	struct drm_gpu_scheduler *sched =
 		container_of(entity->rq, typeof(*sched), rq);
-	ktime_t submit_ts;
+	ktime_t deadline_ts;
 	bool first;
 
 	trace_drm_sched_job(sched_job, entity);
@@ -585,16 +589,15 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
 	/*
 	 * After the sched_job is pushed into the entity queue, it may be
 	 * completed and freed up at any time. We can no longer access it.
-	 * Make sure to set the submit_ts first, to avoid a race.
+	 * Make sure to set the deadline_ts first, to avoid a race.
 	 */
-	sched_job->submit_ts = submit_ts = ktime_get();
+	sched_job->deadline_ts = deadline_ts =
+		__drm_sched_entity_get_job_deadline(entity, ktime_get());
 	first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node);
 
 	/* first job wakes up scheduler */
 	if (first) {
-		submit_ts = __drm_sched_entity_get_job_deadline(entity,
-								submit_ts);
-		sched = drm_sched_rq_add_entity(entity, submit_ts);
+		sched = drm_sched_rq_add_entity(entity, deadline_ts);
 		if (sched)
 			drm_sched_wakeup(sched);
 	}
diff --git a/drivers/gpu/drm/scheduler/sched_internal.h b/drivers/gpu/drm/scheduler/sched_internal.h
index f50e54bfaccc..3d6e853e87b6 100644
--- a/drivers/gpu/drm/scheduler/sched_internal.h
+++ b/drivers/gpu/drm/scheduler/sched_internal.h
@@ -28,10 +28,6 @@
 void drm_sched_fence_scheduled(struct drm_sched_fence *fence,
 			       struct dma_fence *parent);
 void drm_sched_fence_finished(struct drm_sched_fence *fence, int result);
-
-ktime_t drm_sched_entity_get_job_deadline(struct drm_sched_entity *entity,
-					  struct drm_sched_job *job);
-
 /**
  * drm_sched_entity_queue_pop - Low level helper for popping queued jobs
  *
diff --git a/drivers/gpu/drm/scheduler/sched_rq.c b/drivers/gpu/drm/scheduler/sched_rq.c
index 4b142a4c89d1..ffec9691d5a7 100644
--- a/drivers/gpu/drm/scheduler/sched_rq.c
+++ b/drivers/gpu/drm/scheduler/sched_rq.c
@@ -138,7 +138,6 @@ void drm_sched_rq_pop_entity(struct drm_sched_entity *entity)
 {
 	struct drm_sched_job *next_job;
 	struct drm_sched_rq *rq;
-	ktime_t ts;
 
 	/*
 	 * Update the entity's location in the min heap according to
@@ -148,11 +147,10 @@ void drm_sched_rq_pop_entity(struct drm_sched_entity *entity)
 	if (!next_job)
 		return;
 
-	ts = drm_sched_entity_get_job_deadline(entity, next_job);
 	spin_lock(&entity->lock);
 	rq = entity->rq;
 	spin_lock(&rq->lock);
-	drm_sched_rq_update_tree_locked(entity, rq, ts);
+	drm_sched_rq_update_tree_locked(entity, rq, next_job->deadline_ts);
 	spin_unlock(&rq->lock);
 	spin_unlock(&entity->lock);
 }
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 1e1dd16a0d9a..e0c3d84dd8b1 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -331,11 +331,11 @@ struct drm_sched_job {
 	u64				id;
 
 	/**
-	 * @submit_ts:
+	 * @deadline_ts:
 	 *
-	 * When the job was pushed into the entity queue.
+	 * Job deadline set at push time.
 	 */
-	ktime_t				submit_ts;
+	ktime_t				deadline_ts;
 
 	/**
	 * @sched: