From patchwork Wed Nov 25 03:17:03 2020
X-Patchwork-Submitter: Luben Tuikov
X-Patchwork-Id: 11930237
From: Luben Tuikov
To: Andrey Grodzovsky, Christian König, Lucas Stach, Alexander Deucher
Subject: [PATCH 1/6] drm/scheduler: "node" --> "list"
Date: Tue, 24 Nov 2020 22:17:03 -0500
Message-Id: <20201125031708.6433-2-luben.tuikov@amd.com>
In-Reply-To: <20201125031708.6433-1-luben.tuikov@amd.com>
References: <769e72ee-b2d0-d75f-cc83-a85be08e231b@amd.com> <20201125031708.6433-1-luben.tuikov@amd.com>
Cc: Emily Deng, Luben Tuikov, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, steven.price@arm.com

Rename "node" to "list" in struct drm_sched_job, in order to make it
consistent with what we see being used throughout gpu_scheduler.h, for
instance in struct drm_sched_entity, as well as the rest of DRM and
the kernel.

Signed-off-by: Luben Tuikov
Reviewed-by: Christian König
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c |  6 +++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c  |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c     |  2 +-
 drivers/gpu/drm/scheduler/sched_main.c      | 23 +++++++++++----------
 include/drm/gpu_scheduler.h                 |  4 ++--
 5 files changed, 19 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index 5c1f3725c741..8358cae0b5a4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -1427,7 +1427,7 @@ static void amdgpu_ib_preempt_job_recovery(struct drm_gpu_scheduler *sched)
 	struct dma_fence *fence;
 
 	spin_lock(&sched->job_list_lock);
-	list_for_each_entry(s_job, &sched->ring_mirror_list, node) {
+	list_for_each_entry(s_job, &sched->ring_mirror_list, list) {
 		fence = sched->ops->run_job(s_job);
 		dma_fence_put(fence);
 	}
@@ -1459,10 +1459,10 @@ static void amdgpu_ib_preempt_mark_partial_job(struct amdgpu_ring *ring)
 
 no_preempt:
 	spin_lock(&sched->job_list_lock);
-	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
+	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, list) {
 		if (dma_fence_is_signaled(&s_job->s_fence->finished)) {
 			/* remove job from ring_mirror_list */
-			list_del_init(&s_job->node);
+			list_del_init(&s_job->list);
 			sched->ops->free_job(s_job);
 			continue;
 		}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 7560b05e4ac1..4df6de81cd41 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -4128,7 +4128,7 @@ bool amdgpu_device_has_job_running(struct amdgpu_device *adev)
 
 		spin_lock(&ring->sched.job_list_lock);
 		job = list_first_entry_or_null(&ring->sched.ring_mirror_list,
-					       struct drm_sched_job, node);
+					       struct drm_sched_job, list);
 		spin_unlock(&ring->sched.job_list_lock);
 		if (job)
 			return true;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index dcfe8a3b03ff..aca52a46b93d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -271,7 +271,7 @@ void amdgpu_job_stop_all_jobs_on_sched(struct drm_gpu_scheduler *sched)
 	}
 
 	/* Signal all jobs already scheduled to HW */
-	list_for_each_entry(s_job, &sched->ring_mirror_list, node) {
+	list_for_each_entry(s_job, &sched->ring_mirror_list, list) {
 		struct drm_sched_fence *s_fence = s_job->s_fence;
 
 		dma_fence_set_error(&s_fence->finished, -EHWPOISON);
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index c6332d75025e..c52eba407ebd 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -272,7 +272,7 @@ static void drm_sched_job_begin(struct drm_sched_job *s_job)
 	struct drm_gpu_scheduler *sched = s_job->sched;
 
 	spin_lock(&sched->job_list_lock);
-	list_add_tail(&s_job->node, &sched->ring_mirror_list);
+	list_add_tail(&s_job->list, &sched->ring_mirror_list);
 	drm_sched_start_timeout(sched);
 	spin_unlock(&sched->job_list_lock);
 }
@@ -287,7 +287,7 @@ static void drm_sched_job_timedout(struct work_struct *work)
 	/* Protects against concurrent deletion in drm_sched_get_cleanup_job */
 	spin_lock(&sched->job_list_lock);
 	job = list_first_entry_or_null(&sched->ring_mirror_list,
-				       struct drm_sched_job, node);
+				       struct drm_sched_job, list);
 
 	if (job) {
 		/*
@@ -295,7 +295,7 @@ static void drm_sched_job_timedout(struct work_struct *work)
 		 * drm_sched_cleanup_jobs. It will be reinserted back after sched->thread
 		 * is parked at which point it's safe.
 		 */
-		list_del_init(&job->node);
+		list_del_init(&job->list);
 		spin_unlock(&sched->job_list_lock);
 
 		job->sched->ops->timedout_job(job);
@@ -392,7 +392,7 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
 		 * Add at the head of the queue to reflect it was the earliest
 		 * job extracted.
 		 */
-		list_add(&bad->node, &sched->ring_mirror_list);
+		list_add(&bad->list, &sched->ring_mirror_list);
 
 	/*
 	 * Iterate the job list from later to earlier one and either deactive
@@ -400,7 +400,8 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
 	 * signaled.
 	 * This iteration is thread safe as sched thread is stopped.
 	 */
-	list_for_each_entry_safe_reverse(s_job, tmp, &sched->ring_mirror_list, node) {
+	list_for_each_entry_safe_reverse(s_job, tmp, &sched->ring_mirror_list,
+					 list) {
 		if (s_job->s_fence->parent &&
 		    dma_fence_remove_callback(s_job->s_fence->parent,
 					      &s_job->cb)) {
@@ -411,7 +412,7 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
 			 * Locking here is for concurrent resume timeout
 			 */
 			spin_lock(&sched->job_list_lock);
-			list_del_init(&s_job->node);
+			list_del_init(&s_job->list);
 			spin_unlock(&sched->job_list_lock);
 
 			/*
@@ -462,7 +463,7 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery)
 	 * so no new jobs are being inserted or removed. Also concurrent
 	 * GPU recovers can't run in parallel.
 	 */
-	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
+	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, list) {
 		struct dma_fence *fence = s_job->s_fence->parent;
 
 		atomic_inc(&sched->hw_rq_count);
@@ -505,7 +506,7 @@ void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched)
 	bool found_guilty = false;
 	struct dma_fence *fence;
 
-	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
+	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, list) {
 		struct drm_sched_fence *s_fence = s_job->s_fence;
 
 		if (!found_guilty && atomic_read(&s_job->karma) > sched->hang_limit) {
@@ -565,7 +566,7 @@ int drm_sched_job_init(struct drm_sched_job *job,
 		return -ENOMEM;
 
 	job->id = atomic64_inc_return(&sched->job_id_count);
-	INIT_LIST_HEAD(&job->node);
+	INIT_LIST_HEAD(&job->list);
 
 	return 0;
 }
@@ -684,11 +685,11 @@ drm_sched_get_cleanup_job(struct drm_gpu_scheduler *sched)
 	spin_lock(&sched->job_list_lock);
 
 	job = list_first_entry_or_null(&sched->ring_mirror_list,
-				       struct drm_sched_job, node);
+				       struct drm_sched_job, list);
 
 	if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
 		/* remove job from ring_mirror_list */
-		list_del_init(&job->node);
+		list_del_init(&job->list);
 	} else {
 		job = NULL;
 		/* queue timeout for next job */
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 92436553fd6a..3add0072bd37 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -189,14 +189,14 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f);
  */
 struct drm_sched_job {
 	struct spsc_node		queue_node;
+	struct list_head		list;
 	struct drm_gpu_scheduler	*sched;
 	struct drm_sched_fence		*s_fence;
 	struct dma_fence_cb		finish_cb;
-	struct list_head		node;
 	uint64_t			id;
 	atomic_t			karma;
 	enum drm_sched_priority		s_priority;
-	struct drm_sched_entity  *entity;
+	struct drm_sched_entity         *entity;
 	struct dma_fence_cb		cb;
 };
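[Editorial aside: the rename above works mechanically because the kernel's intrusive `list_head` macros take the member name ("node", now "list") as an argument and recover the enclosing struct with `container_of()`. The following is a minimal userspace sketch of that pattern; the names `job`, `sum_job_ids` and `pending_list` are illustrative, and the list primitives and `container_of` are reimplemented here rather than taken from kernel headers.]

```c
#include <stddef.h>

/* Userspace sketch of the kernel's intrusive-list idiom; not kernel code. */
struct list_head {
	struct list_head *next, *prev;
};

static void list_init(struct list_head *head)
{
	head->next = head->prev = head;
}

static void list_add_tail(struct list_head *item, struct list_head *head)
{
	item->prev = head->prev;
	item->next = head;
	head->prev->next = item;
	head->prev = item;
}

/* container_of(): recover the enclosing struct from a pointer to one of
 * its members. The member NAME is part of the call, which is why every
 * list_for_each_entry() site in the diff must name "list" (was "node"). */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct job {
	int id;
	struct list_head list;	/* the member renamed from "node" */
};

/* Walk the list and sum job ids, as list_for_each_entry() would. */
static int sum_job_ids(struct list_head *pending_list)
{
	int sum = 0;
	struct list_head *p;

	for (p = pending_list->next; p != pending_list; p = p->next)
		sum += container_of(p, struct job, list)->id;
	return sum;
}
```

Because traversal macros embed the member name at every call site, renaming the member is a purely mechanical tree-wide substitution, which is what this patch performs.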
From patchwork Wed Nov 25 03:17:04 2020
X-Patchwork-Submitter: Luben Tuikov
X-Patchwork-Id: 11930241
From: Luben Tuikov
To: Andrey Grodzovsky, Christian König, Lucas Stach, Alexander Deucher
Subject: [PATCH 2/6] gpu/drm: ring_mirror_list --> pending_list
Date: Tue, 24 Nov 2020 22:17:04 -0500
Message-Id: <20201125031708.6433-3-luben.tuikov@amd.com>
In-Reply-To: <20201125031708.6433-1-luben.tuikov@amd.com>
References: <769e72ee-b2d0-d75f-cc83-a85be08e231b@amd.com> <20201125031708.6433-1-luben.tuikov@amd.com>
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR1201MB2507 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Emily Deng , Luben Tuikov , amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, steven.price@arm.com Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Rename "ring_mirror_list" to "pending_list", to describe what something is, not what it does, how it's used, or how the hardware implements it. This also abstracts the actual hardware implementation, i.e. how the low-level driver communicates with the device it drives, ring, CAM, etc., shouldn't be exposed to DRM. The pending_list keeps jobs submitted, which are out of our control. Usually this means they are pending execution status in hardware, but the latter definition is a more general (inclusive) definition. Signed-off-by: Luben Tuikov --- drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 4 +-- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 4 +-- drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 2 +- drivers/gpu/drm/scheduler/sched_main.c | 34 ++++++++++----------- include/drm/gpu_scheduler.h | 10 +++--- 5 files changed, 27 insertions(+), 27 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c index 8358cae0b5a4..db77a5bdfa45 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c @@ -1427,7 +1427,7 @@ static void amdgpu_ib_preempt_job_recovery(struct drm_gpu_scheduler *sched) struct dma_fence *fence; spin_lock(&sched->job_list_lock); - list_for_each_entry(s_job, &sched->ring_mirror_list, list) { + list_for_each_entry(s_job, &sched->pending_list, list) { fence = sched->ops->run_job(s_job); dma_fence_put(fence); } @@ -1459,7 +1459,7 @@ static void amdgpu_ib_preempt_mark_partial_job(struct amdgpu_ring 
*ring) no_preempt: spin_lock(&sched->job_list_lock); - list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, list) { + list_for_each_entry_safe(s_job, tmp, &sched->pending_list, list) { if (dma_fence_is_signaled(&s_job->s_fence->finished)) { /* remove job from ring_mirror_list */ list_del_init(&s_job->list); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c index 4df6de81cd41..fbae600aa5f9 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c @@ -4127,8 +4127,8 @@ bool amdgpu_device_has_job_running(struct amdgpu_device *adev) continue; spin_lock(&ring->sched.job_list_lock); - job = list_first_entry_or_null(&ring->sched.ring_mirror_list, - struct drm_sched_job, list); + job = list_first_entry_or_null(&ring->sched.pending_list, + struct drm_sched_job, list); spin_unlock(&ring->sched.job_list_lock); if (job) return true; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c index aca52a46b93d..ff48101bab55 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c @@ -271,7 +271,7 @@ void amdgpu_job_stop_all_jobs_on_sched(struct drm_gpu_scheduler *sched) } /* Signal all jobs already scheduled to HW */ - list_for_each_entry(s_job, &sched->ring_mirror_list, list) { + list_for_each_entry(s_job, &sched->pending_list, list) { struct drm_sched_fence *s_fence = s_job->s_fence; dma_fence_set_error(&s_fence->finished, -EHWPOISON); diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c index c52eba407ebd..b694df12aaba 100644 --- a/drivers/gpu/drm/scheduler/sched_main.c +++ b/drivers/gpu/drm/scheduler/sched_main.c @@ -198,7 +198,7 @@ EXPORT_SYMBOL(drm_sched_dependency_optimized); static void drm_sched_start_timeout(struct drm_gpu_scheduler *sched) { if (sched->timeout != MAX_SCHEDULE_TIMEOUT && - !list_empty(&sched->ring_mirror_list)) + 
!list_empty(&sched->pending_list)) schedule_delayed_work(&sched->work_tdr, sched->timeout); } @@ -258,7 +258,7 @@ void drm_sched_resume_timeout(struct drm_gpu_scheduler *sched, { spin_lock(&sched->job_list_lock); - if (list_empty(&sched->ring_mirror_list)) + if (list_empty(&sched->pending_list)) cancel_delayed_work(&sched->work_tdr); else mod_delayed_work(system_wq, &sched->work_tdr, remaining); @@ -272,7 +272,7 @@ static void drm_sched_job_begin(struct drm_sched_job *s_job) struct drm_gpu_scheduler *sched = s_job->sched; spin_lock(&sched->job_list_lock); - list_add_tail(&s_job->list, &sched->ring_mirror_list); + list_add_tail(&s_job->list, &sched->pending_list); drm_sched_start_timeout(sched); spin_unlock(&sched->job_list_lock); } @@ -286,7 +286,7 @@ static void drm_sched_job_timedout(struct work_struct *work) /* Protects against concurrent deletion in drm_sched_get_cleanup_job */ spin_lock(&sched->job_list_lock); - job = list_first_entry_or_null(&sched->ring_mirror_list, + job = list_first_entry_or_null(&sched->pending_list, struct drm_sched_job, list); if (job) { @@ -371,7 +371,7 @@ EXPORT_SYMBOL(drm_sched_increase_karma); * Stop the scheduler and also removes and frees all completed jobs. * Note: bad job will not be freed as it might be used later and so it's * callers responsibility to release it manually if it's not part of the - * mirror list any more. + * pending list any more. * */ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad) @@ -392,15 +392,15 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad) * Add at the head of the queue to reflect it was the earliest * job extracted. 
*/ - list_add(&bad->list, &sched->ring_mirror_list); + list_add(&bad->list, &sched->pending_list); /* * Iterate the job list from later to earlier one and either deactive - * their HW callbacks or remove them from mirror list if they already + * their HW callbacks or remove them from pending list if they already * signaled. * This iteration is thread safe as sched thread is stopped. */ - list_for_each_entry_safe_reverse(s_job, tmp, &sched->ring_mirror_list, + list_for_each_entry_safe_reverse(s_job, tmp, &sched->pending_list, list) { if (s_job->s_fence->parent && dma_fence_remove_callback(s_job->s_fence->parent, @@ -408,7 +408,7 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad) atomic_dec(&sched->hw_rq_count); } else { /* - * remove job from ring_mirror_list. + * remove job from pending_list. * Locking here is for concurrent resume timeout */ spin_lock(&sched->job_list_lock); @@ -463,7 +463,7 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery) * so no new jobs are being inserted or removed. Also concurrent * GPU recovers can't run in parallel. 
*/ - list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, list) { + list_for_each_entry_safe(s_job, tmp, &sched->pending_list, list) { struct dma_fence *fence = s_job->s_fence->parent; atomic_inc(&sched->hw_rq_count); @@ -494,7 +494,7 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery) EXPORT_SYMBOL(drm_sched_start); /** - * drm_sched_resubmit_jobs - helper to relunch job from mirror ring list + * drm_sched_resubmit_jobs - helper to relunch job from pending ring list * * @sched: scheduler instance * @@ -506,7 +506,7 @@ void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched) bool found_guilty = false; struct dma_fence *fence; - list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, list) { + list_for_each_entry_safe(s_job, tmp, &sched->pending_list, list) { struct drm_sched_fence *s_fence = s_job->s_fence; if (!found_guilty && atomic_read(&s_job->karma) > sched->hang_limit) { @@ -665,7 +665,7 @@ static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb) * * @sched: scheduler instance * - * Returns the next finished job from the mirror list (if there is one) + * Returns the next finished job from the pending list (if there is one) * ready for it to be destroyed. 
*/ static struct drm_sched_job * @@ -675,7 +675,7 @@ drm_sched_get_cleanup_job(struct drm_gpu_scheduler *sched) /* * Don't destroy jobs while the timeout worker is running OR thread - * is being parked and hence assumed to not touch ring_mirror_list + * is being parked and hence assumed to not touch pending_list */ if ((sched->timeout != MAX_SCHEDULE_TIMEOUT && !cancel_delayed_work(&sched->work_tdr)) || @@ -684,11 +684,11 @@ drm_sched_get_cleanup_job(struct drm_gpu_scheduler *sched) spin_lock(&sched->job_list_lock); - job = list_first_entry_or_null(&sched->ring_mirror_list, + job = list_first_entry_or_null(&sched->pending_list, struct drm_sched_job, list); if (job && dma_fence_is_signaled(&job->s_fence->finished)) { - /* remove job from ring_mirror_list */ + /* remove job from pending_list */ list_del_init(&job->list); } else { job = NULL; @@ -858,7 +858,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, init_waitqueue_head(&sched->wake_up_worker); init_waitqueue_head(&sched->job_scheduled); - INIT_LIST_HEAD(&sched->ring_mirror_list); + INIT_LIST_HEAD(&sched->pending_list); spin_lock_init(&sched->job_list_lock); atomic_set(&sched->hw_rq_count, 0); INIT_DELAYED_WORK(&sched->work_tdr, drm_sched_job_timedout); diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h index 3add0072bd37..2e0c368e19f6 100644 --- a/include/drm/gpu_scheduler.h +++ b/include/drm/gpu_scheduler.h @@ -174,7 +174,7 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f); * @sched: the scheduler instance on which this job is scheduled. * @s_fence: contains the fences for the scheduling of job. * @finish_cb: the callback for the finished fence. - * @node: used to append this struct to the @drm_gpu_scheduler.ring_mirror_list. + * @node: used to append this struct to the @drm_gpu_scheduler.pending_list. * @id: a unique id assigned to each job scheduled on the scheduler. * @karma: increment on every hang caused by this job. 
 * If this exceeds the hang
 * limit of the scheduler then the job is marked guilty and will not
@@ -203,7 +203,7 @@ struct drm_sched_job {
 static inline bool drm_sched_invalidate_job(struct drm_sched_job *s_job,
 					    int threshold)
 {
-	return (s_job && atomic_inc_return(&s_job->karma) > threshold);
+	return s_job && atomic_inc_return(&s_job->karma) > threshold;
 }

 /**
@@ -260,8 +260,8 @@ struct drm_sched_backend_ops {
  * @work_tdr: schedules a delayed call to @drm_sched_job_timedout after the
  *            timeout interval is over.
  * @thread: the kthread on which the scheduler which run.
- * @ring_mirror_list: the list of jobs which are currently in the job queue.
- * @job_list_lock: lock to protect the ring_mirror_list.
+ * @pending_list: the list of jobs which are currently in the job queue.
+ * @job_list_lock: lock to protect the pending_list.
  * @hang_limit: once the hangs by a job crosses this limit then it is marked
  *              guilty and it will be considered for scheduling further.
  * @score: score to help loadbalancer pick a idle sched
@@ -282,7 +282,7 @@ struct drm_gpu_scheduler {
 	atomic64_t			job_id_count;
 	struct delayed_work		work_tdr;
 	struct task_struct		*thread;
-	struct list_head		ring_mirror_list;
+	struct list_head		pending_list;
 	spinlock_t			job_list_lock;
 	int				hang_limit;
 	atomic_t			score;

From patchwork Wed Nov 25 03:17:05 2020
X-Patchwork-Submitter: Luben Tuikov
X-Patchwork-Id: 11930249
From: Luben Tuikov
To: Andrey Grodzovsky, Christian König, Lucas Stach, Alexander Deucher
Cc: Emily Deng, Luben Tuikov, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, steven.price@arm.com
Subject: [PATCH 3/6] drm/scheduler: Job timeout handler returns status
Date: Tue, 24 Nov 2020 22:17:05 -0500
Message-Id: <20201125031708.6433-4-luben.tuikov@amd.com>
In-Reply-To: <20201125031708.6433-1-luben.tuikov@amd.com>
References: <769e72ee-b2d0-d75f-cc83-a85be08e231b@amd.com> <20201125031708.6433-1-luben.tuikov@amd.com>

The job timeout handler
now returns a status to the DRM layer indicating whether the job
was successfully cancelled, or whether more time should be given
to the job to complete.

Signed-off-by: Luben Tuikov
Reported-by: kernel test robot
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c |  6 ++++--
 include/drm/gpu_scheduler.h             | 13 ++++++++++---
 2 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index ff48101bab55..81b73790ecc6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -28,7 +28,7 @@
 #include "amdgpu.h"
 #include "amdgpu_trace.h"

-static void amdgpu_job_timedout(struct drm_sched_job *s_job)
+static int amdgpu_job_timedout(struct drm_sched_job *s_job)
 {
 	struct amdgpu_ring *ring = to_amdgpu_ring(s_job->sched);
 	struct amdgpu_job *job = to_amdgpu_job(s_job);
@@ -41,7 +41,7 @@ static void amdgpu_job_timedout(struct drm_sched_job *s_job)
 	    amdgpu_ring_soft_recovery(ring, job->vmid, s_job->s_fence->parent)) {
 		DRM_ERROR("ring %s timeout, but soft recovered\n",
 			  s_job->sched->name);
-		return;
+		return 0;
 	}

 	amdgpu_vm_get_task_info(ring->adev, job->pasid, &ti);
@@ -53,10 +53,12 @@ static void amdgpu_job_timedout(struct drm_sched_job *s_job)

 	if (amdgpu_device_should_recover_gpu(ring->adev)) {
 		amdgpu_device_gpu_recover(ring->adev, job);
+		return 0;
 	} else {
 		drm_sched_suspend_timeout(&ring->sched);
 		if (amdgpu_sriov_vf(adev))
 			adev->virt.tdr_debug = true;
+		return 1;
 	}
 }

diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 2e0c368e19f6..61f7121e1c19 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -230,10 +230,17 @@ struct drm_sched_backend_ops {
 	struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);

 	/**
-	 * @timedout_job: Called when a job has taken too long to execute,
-	 * to trigger GPU recovery.
+	 * @timedout_job: Called when a job has taken too long to execute,
+	 * to trigger GPU recovery.
+	 *
+	 * Return 0, if the job has been aborted successfully and will
+	 * never be heard of from the device. Return non-zero if the
+	 * job wasn't able to be aborted, i.e. if more time should be
+	 * given to this job. The result is not "bool" as this
+	 * function is not a predicate, although its result may seem
+	 * as one.
 	 */
-	void (*timedout_job)(struct drm_sched_job *sched_job);
+	int (*timedout_job)(struct drm_sched_job *sched_job);

 	/**
 	 * @free_job: Called once the job's finished fence has been signaled

From patchwork Wed Nov 25 03:17:06 2020
X-Patchwork-Submitter: Luben Tuikov
X-Patchwork-Id: 11930243
From: Luben Tuikov
To: Andrey Grodzovsky, Christian König, Lucas Stach, Alexander Deucher
Cc: Emily Deng, Luben Tuikov, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, steven.price@arm.com
Subject: [PATCH 4/6] drm/scheduler: Essentialize the job done callback
Date: Tue, 24 Nov 2020 22:17:06 -0500
Message-Id: <20201125031708.6433-5-luben.tuikov@amd.com>
In-Reply-To: <20201125031708.6433-1-luben.tuikov@amd.com>
References: <769e72ee-b2d0-d75f-cc83-a85be08e231b@amd.com> <20201125031708.6433-1-luben.tuikov@amd.com>

The job done callback is
called from various places and in two roles: as the job-done
handler, and as a fence callback. Reduce the callback to a minimal
function that only completes the job, and add a second function
with the fence-callback prototype that calls the first. Later
patches in this series use this in the completion code.

Signed-off-by: Luben Tuikov
Reviewed-by: Christian König
---
 drivers/gpu/drm/scheduler/sched_main.c | 73 ++++++++++++++------------
 1 file changed, 40 insertions(+), 33 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index b694df12aaba..3eb7618a627d 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -60,8 +60,6 @@
 #define to_drm_sched_job(sched_job)		\
 		container_of((sched_job), struct drm_sched_job, queue_node)

-static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb);
-
 /**
  * drm_sched_rq_init - initialize a given run queue struct
  *
@@ -162,6 +160,40 @@ drm_sched_rq_select_entity(struct drm_sched_rq *rq)
 	return NULL;
 }

+/**
+ * drm_sched_job_done - complete a job
+ * @s_job: pointer to the job which is done
+ *
+ * Finish the job's fence and wake up the worker thread.
+ */
+static void drm_sched_job_done(struct drm_sched_job *s_job)
+{
+	struct drm_sched_fence *s_fence = s_job->s_fence;
+	struct drm_gpu_scheduler *sched = s_fence->sched;
+
+	atomic_dec(&sched->hw_rq_count);
+	atomic_dec(&sched->score);
+
+	trace_drm_sched_process_job(s_fence);
+
+	dma_fence_get(&s_fence->finished);
+	drm_sched_fence_finished(s_fence);
+	dma_fence_put(&s_fence->finished);
+	wake_up_interruptible(&sched->wake_up_worker);
+}
+
+/**
+ * drm_sched_job_done_cb - the callback for a done job
+ * @f: fence
+ * @cb: fence callbacks
+ */
+static void drm_sched_job_done_cb(struct dma_fence *f, struct dma_fence_cb *cb)
+{
+	struct drm_sched_job *s_job = container_of(cb, struct drm_sched_job, cb);
+
+	drm_sched_job_done(s_job);
+}
+
 /**
  * drm_sched_dependency_optimized
  *
@@ -473,14 +505,14 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery)

 		if (fence) {
 			r = dma_fence_add_callback(fence, &s_job->cb,
-						   drm_sched_process_job);
+						   drm_sched_job_done_cb);
 			if (r == -ENOENT)
-				drm_sched_process_job(fence, &s_job->cb);
+				drm_sched_job_done(s_job);
 			else if (r)
 				DRM_ERROR("fence add callback failed (%d)\n", r);
 		} else
-			drm_sched_process_job(NULL, &s_job->cb);
+			drm_sched_job_done(s_job);
 	}

 	if (full_recovery) {
@@ -635,31 +667,6 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
 	return entity;
 }

-/**
- * drm_sched_process_job - process a job
- *
- * @f: fence
- * @cb: fence callbacks
- *
- * Called after job has finished execution.
- */
-static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
-{
-	struct drm_sched_job *s_job = container_of(cb, struct drm_sched_job, cb);
-	struct drm_sched_fence *s_fence = s_job->s_fence;
-	struct drm_gpu_scheduler *sched = s_fence->sched;
-
-	atomic_dec(&sched->hw_rq_count);
-	atomic_dec(&sched->score);
-
-	trace_drm_sched_process_job(s_fence);
-
-	dma_fence_get(&s_fence->finished);
-	drm_sched_fence_finished(s_fence);
-	dma_fence_put(&s_fence->finished);
-	wake_up_interruptible(&sched->wake_up_worker);
-}
-
 /**
  * drm_sched_get_cleanup_job - fetch the next finished job to be destroyed
  *
@@ -809,9 +816,9 @@ static int drm_sched_main(void *param)
 		if (!IS_ERR_OR_NULL(fence)) {
 			s_fence->parent = dma_fence_get(fence);
 			r = dma_fence_add_callback(fence, &sched_job->cb,
-						   drm_sched_process_job);
+						   drm_sched_job_done_cb);
 			if (r == -ENOENT)
-				drm_sched_process_job(fence, &sched_job->cb);
+				drm_sched_job_done(sched_job);
 			else if (r)
 				DRM_ERROR("fence add callback failed (%d)\n", r);
@@ -820,7 +827,7 @@ static int drm_sched_main(void *param)
 			if (IS_ERR(fence))
 				dma_fence_set_error(&s_fence->finished, PTR_ERR(fence));

-			drm_sched_process_job(NULL, &sched_job->cb);
+			drm_sched_job_done(sched_job);
 		}

 		wake_up(&sched->job_scheduled);

From patchwork Wed Nov 25 03:17:07 2020
X-Patchwork-Submitter: Luben Tuikov
X-Patchwork-Id: 11930247
From: Luben Tuikov
To: Andrey Grodzovsky, Christian König, Lucas Stach, Alexander Deucher
Cc: Emily Deng, Luben Tuikov, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, steven.price@arm.com
Subject: [PATCH 5/6] drm/amdgpu: Don't hardcode thread name length
Date: Tue, 24 Nov 2020 22:17:07 -0500
Message-Id: <20201125031708.6433-6-luben.tuikov@amd.com>
In-Reply-To: <20201125031708.6433-1-luben.tuikov@amd.com>
References: <769e72ee-b2d0-d75f-cc83-a85be08e231b@amd.com> <20201125031708.6433-1-luben.tuikov@amd.com>

Introduce a macro
DRM_THREAD_NAME_LEN and use that to define ring name size,
instead of hardcoding it to 16.

Signed-off-by: Luben Tuikov
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 2 +-
 include/drm/gpu_scheduler.h              | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
index 7112137689db..bbd46c6dec65 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -230,7 +230,7 @@ struct amdgpu_ring {
 	unsigned		wptr_offs;
 	unsigned		fence_offs;
 	uint64_t		current_ctx;
-	char			name[16];
+	char			name[DRM_THREAD_NAME_LEN];
 	u32			trail_seq;
 	unsigned		trail_fence_offs;
 	u64			trail_fence_gpu_addr;
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 61f7121e1c19..3a5686c3b5e9 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -30,6 +30,8 @@
 #define MAX_WAIT_SCHED_ENTITY_Q_EMPTY	msecs_to_jiffies(1000)
 
+#define DRM_THREAD_NAME_LEN		TASK_COMM_LEN
+
 struct drm_gpu_scheduler;
 struct drm_sched_rq;

From patchwork Wed Nov 25 03:17:08 2020
From: Luben Tuikov
To: Andrey Grodzovsky, Christian König, Lucas Stach, Alexander Deucher
Subject: [PATCH 6/6] drm/sched: Make use of a "done" thread
Date: Tue, 24 Nov 2020 22:17:08 -0500
Message-Id: <20201125031708.6433-7-luben.tuikov@amd.com>
In-Reply-To: <20201125031708.6433-1-luben.tuikov@amd.com>
References: <769e72ee-b2d0-d75f-cc83-a85be08e231b@amd.com>
 <20201125031708.6433-1-luben.tuikov@amd.com>
MIME-Version: 1.0
X-BeenThere: dri-devel@lists.freedesktop.org
List-Id: Direct Rendering Infrastructure - Development
Cc: Emily Deng, Luben Tuikov, amd-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org, steven.price@arm.com
Errors-To: dri-devel-bounces@lists.freedesktop.org
Sender: "dri-devel"

Add a "done" list to
which all completed jobs are added to be freed. The drm_sched_job_done()
callback is the producer of jobs to this list.

Add a "done" thread which consumes from the done list and frees up jobs.
Now, the main scheduler thread only pushes jobs to the GPU and the "done"
thread frees them up, on the way out of the GPU when they've completed
execution.

Make use of the status returned by the GPU driver timeout handler to
decide whether to leave the job in the pending list, or to send it off
to the done list. If a job is done, it is added to the done list and the
done thread woken up. If a job needs more time, it is left on the
pending list and the timeout timer restarted.

Eliminate the polling mechanism of picking out done jobs from the
pending list, i.e. eliminate drm_sched_get_cleanup_job(). Now the main
scheduler thread only pushes jobs down to the GPU.

Various other optimizations to the GPU scheduler and job recovery are
possible with this format.

Signed-off-by: Luben Tuikov
---
 drivers/gpu/drm/scheduler/sched_main.c | 173 +++++++++++++------------
 include/drm/gpu_scheduler.h            |  14 ++
 2 files changed, 101 insertions(+), 86 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 3eb7618a627d..289ae68cd97f 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -164,7 +164,8 @@ drm_sched_rq_select_entity(struct drm_sched_rq *rq)
  * drm_sched_job_done - complete a job
  * @s_job: pointer to the job which is done
  *
- * Finish the job's fence and wake up the worker thread.
+ * Finish the job's fence, move it to the done list,
+ * and wake up the done thread.
  */
 static void drm_sched_job_done(struct drm_sched_job *s_job)
 {
@@ -179,7 +180,12 @@ static void drm_sched_job_done(struct drm_sched_job *s_job)
 	dma_fence_get(&s_fence->finished);
 	drm_sched_fence_finished(s_fence);
 	dma_fence_put(&s_fence->finished);
-	wake_up_interruptible(&sched->wake_up_worker);
+
+	spin_lock(&sched->job_list_lock);
+	list_move(&s_job->list, &sched->done_list);
+	spin_unlock(&sched->job_list_lock);
+
+	wake_up_interruptible(&sched->done_wait_q);
 }
 
 /**
@@ -221,11 +227,10 @@ bool drm_sched_dependency_optimized(struct dma_fence* fence,
 EXPORT_SYMBOL(drm_sched_dependency_optimized);
 
 /**
- * drm_sched_start_timeout - start timeout for reset worker
- *
- * @sched: scheduler instance to start the worker for
+ * drm_sched_start_timeout - start a timeout timer
+ * @sched: scheduler instance whose job we're timing
  *
- * Start the timeout for the given scheduler.
+ * Start a timeout timer for the given scheduler.
  */
 static void drm_sched_start_timeout(struct drm_gpu_scheduler *sched)
 {
@@ -305,8 +310,8 @@ static void drm_sched_job_begin(struct drm_sched_job *s_job)
 
 	spin_lock(&sched->job_list_lock);
 	list_add_tail(&s_job->list, &sched->pending_list);
-	drm_sched_start_timeout(sched);
 	spin_unlock(&sched->job_list_lock);
+	drm_sched_start_timeout(sched);
 }
 
 static void drm_sched_job_timedout(struct work_struct *work)
@@ -316,37 +321,30 @@ static void drm_sched_job_timedout(struct work_struct *work)
 
 	sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
 
-	/* Protects against concurrent deletion in drm_sched_get_cleanup_job */
 	spin_lock(&sched->job_list_lock);
 	job = list_first_entry_or_null(&sched->pending_list,
 				       struct drm_sched_job, list);
+	spin_unlock(&sched->job_list_lock);
 
 	if (job) {
-		/*
-		 * Remove the bad job so it cannot be freed by concurrent
-		 * drm_sched_cleanup_jobs. It will be reinserted back after sched->thread
-		 * is parked at which point it's safe.
-		 */
-		list_del_init(&job->list);
-		spin_unlock(&sched->job_list_lock);
+		int res;
 
-		job->sched->ops->timedout_job(job);
+		job->job_status |= DRM_JOB_STATUS_TIMEOUT;
+		res = job->sched->ops->timedout_job(job);
+		if (res == 0) {
+			/* The job is out of the device.
+			 */
+			spin_lock(&sched->job_list_lock);
+			list_move(&job->list, &sched->done_list);
+			spin_unlock(&sched->job_list_lock);
 
-		/*
-		 * Guilty job did complete and hence needs to be manually removed
-		 * See drm_sched_stop doc.
-		 */
-		if (sched->free_guilty) {
-			job->sched->ops->free_job(job);
-			sched->free_guilty = false;
+			wake_up_interruptible(&sched->done_wait_q);
+		} else {
+			/* The job needs more time.
+			 */
+			drm_sched_start_timeout(sched);
 		}
-	} else {
-		spin_unlock(&sched->job_list_lock);
 	}
-
-	spin_lock(&sched->job_list_lock);
-	drm_sched_start_timeout(sched);
-	spin_unlock(&sched->job_list_lock);
 }
 
 /**
@@ -511,15 +509,13 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery)
 			else if (r)
 				DRM_ERROR("fence add callback failed (%d)\n", r);
 
-		} else
+		} else {
 			drm_sched_job_done(s_job);
+		}
 	}
 
-	if (full_recovery) {
-		spin_lock(&sched->job_list_lock);
+	if (full_recovery)
 		drm_sched_start_timeout(sched);
-		spin_unlock(&sched->job_list_lock);
-	}
 
 	kthread_unpark(sched->thread);
 }
@@ -667,47 +663,6 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
 	return entity;
 }
 
-/**
- * drm_sched_get_cleanup_job - fetch the next finished job to be destroyed
- *
- * @sched: scheduler instance
- *
- * Returns the next finished job from the pending list (if there is one)
- * ready for it to be destroyed.
- */
-static struct drm_sched_job *
-drm_sched_get_cleanup_job(struct drm_gpu_scheduler *sched)
-{
-	struct drm_sched_job *job;
-
-	/*
-	 * Don't destroy jobs while the timeout worker is running OR thread
-	 * is being parked and hence assumed to not touch pending_list
-	 */
-	if ((sched->timeout != MAX_SCHEDULE_TIMEOUT &&
-	    !cancel_delayed_work(&sched->work_tdr)) ||
-	    kthread_should_park())
-		return NULL;
-
-	spin_lock(&sched->job_list_lock);
-
-	job = list_first_entry_or_null(&sched->pending_list,
-				       struct drm_sched_job, list);
-
-	if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
-		/* remove job from pending_list */
-		list_del_init(&job->list);
-	} else {
-		job = NULL;
-		/* queue timeout for next job */
-		drm_sched_start_timeout(sched);
-	}
-
-	spin_unlock(&sched->job_list_lock);
-
-	return job;
-}
-
 /**
  * drm_sched_pick_best - Get a drm sched from a sched_list with the least load
  * @sched_list: list of drm_gpu_schedulers
@@ -761,6 +716,44 @@ static bool drm_sched_blocked(struct drm_gpu_scheduler *sched)
 	return false;
 }
 
+/**
+ * drm_sched_done - free done tasks
+ * @param: pointer to a scheduler instance
+ *
+ * Returns 0.
+ */
+static int drm_sched_done(void *param)
+{
+	struct drm_gpu_scheduler *sched = param;
+
+	do {
+		LIST_HEAD(done_q);
+
+		wait_event_interruptible(sched->done_wait_q,
+					 kthread_should_stop() ||
+					 !list_empty(&sched->done_list));
+
+		spin_lock(&sched->job_list_lock);
+		list_splice_init(&sched->done_list, &done_q);
+		spin_unlock(&sched->job_list_lock);
+
+		if (list_empty(&done_q))
+			continue;
+
+		while (!list_empty(&done_q)) {
+			struct drm_sched_job *job;
+
+			job = list_first_entry(&done_q,
+					       struct drm_sched_job,
+					       list);
+			list_del_init(&job->list);
+			sched->ops->free_job(job);
+		}
+	} while (!kthread_should_stop());
+
+	return 0;
+}
+
 /**
  * drm_sched_main - main scheduler thread
  *
@@ -770,7 +763,7 @@ static bool drm_sched_blocked(struct drm_gpu_scheduler *sched)
  */
 static int drm_sched_main(void *param)
 {
-	struct drm_gpu_scheduler *sched = (struct drm_gpu_scheduler *)param;
+	struct drm_gpu_scheduler *sched = param;
 	int r;
 
 	sched_set_fifo_low(current);
@@ -780,20 +773,12 @@ static int drm_sched_main(void *param)
 		struct drm_sched_fence *s_fence;
 		struct drm_sched_job *sched_job;
 		struct dma_fence *fence;
-		struct drm_sched_job *cleanup_job = NULL;
 
 		wait_event_interruptible(sched->wake_up_worker,
-					 (cleanup_job = drm_sched_get_cleanup_job(sched)) ||
 					 (!drm_sched_blocked(sched) &&
 					  (entity = drm_sched_select_entity(sched))) ||
 					 kthread_should_stop());
 
-		if (cleanup_job) {
-			sched->ops->free_job(cleanup_job);
-			/* queue timeout for next job */
-			drm_sched_start_timeout(sched);
-		}
-
 		if (!entity)
 			continue;
@@ -820,8 +805,7 @@ static int drm_sched_main(void *param)
 			if (r == -ENOENT)
 				drm_sched_job_done(sched_job);
 			else if (r)
-				DRM_ERROR("fence add callback failed (%d)\n",
-					  r);
+				DRM_ERROR("fence add callback failed (%d)\n", r);
 			dma_fence_put(fence);
 		} else {
 			if (IS_ERR(fence))
@@ -865,7 +849,9 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
 
 	init_waitqueue_head(&sched->wake_up_worker);
 	init_waitqueue_head(&sched->job_scheduled);
+	init_waitqueue_head(&sched->done_wait_q);
 	INIT_LIST_HEAD(&sched->pending_list);
+	INIT_LIST_HEAD(&sched->done_list);
 	spin_lock_init(&sched->job_list_lock);
 	atomic_set(&sched->hw_rq_count, 0);
 	INIT_DELAYED_WORK(&sched->work_tdr, drm_sched_job_timedout);
@@ -881,6 +867,21 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
 		return ret;
 	}
 
+	snprintf(sched->thread_done_name, DRM_THREAD_NAME_LEN, "%s%s",
+		 sched->name, "-done");
+	sched->thread_done_name[DRM_THREAD_NAME_LEN - 1] = '\0';
+	sched->thread_done = kthread_run(drm_sched_done, sched,
+					 sched->thread_done_name);
+	if (IS_ERR(sched->thread_done)) {
+		ret = kthread_stop(sched->thread);
+		if (!ret) {
+			/* free_kthread_struct(sched->thread); */
+			sched->thread = NULL;
+		}
+		DRM_ERROR("Failed to start thread %s", sched->thread_done_name);
+		return ret;
+	}
+
 	sched->ready = true;
 	return 0;
 }
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 3a5686c3b5e9..b282d6158b50 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -169,6 +169,12 @@ struct drm_sched_fence {
 
 struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f);
 
+enum drm_job_status {
+	DRM_JOB_STATUS_NONE    = 0 << 0,
+	DRM_JOB_STATUS_DONE    = 1 << 0,
+	DRM_JOB_STATUS_TIMEOUT = 1 << 1,
+};
+
 /**
  * struct drm_sched_job - A job to be run by an entity.
  *
@@ -198,6 +204,7 @@ struct drm_sched_job {
 	uint64_t			id;
 	atomic_t			karma;
 	enum drm_sched_priority		s_priority;
+	enum drm_job_status		job_status;
 	struct drm_sched_entity		*entity;
 	struct dma_fence_cb		cb;
 };
@@ -284,15 +291,22 @@ struct drm_gpu_scheduler {
 	uint32_t			hw_submission_limit;
 	long				timeout;
 	const char			*name;
+	char				thread_done_name[DRM_THREAD_NAME_LEN];
+
 	struct drm_sched_rq		sched_rq[DRM_SCHED_PRIORITY_COUNT];
 	wait_queue_head_t		wake_up_worker;
 	wait_queue_head_t		job_scheduled;
+	wait_queue_head_t		done_wait_q;
 	atomic_t			hw_rq_count;
 	atomic64_t			job_id_count;
 	struct delayed_work		work_tdr;
 	struct task_struct		*thread;
+	struct task_struct		*thread_done;
+
 	struct list_head		pending_list;
+	struct list_head		done_list;
 	spinlock_t			job_list_lock;
+
 	int				hang_limit;
 	atomic_t			score;
 	bool				ready;