From patchwork Fri Dec 4 03:17:19 2020
X-Patchwork-Submitter: Luben Tuikov
X-Patchwork-Id: 11950575
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
From: Luben Tuikov
To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: [PATCH 2/5] gpu/drm: ring_mirror_list --> pending_list
Date: Thu, 3 Dec 2020 22:17:19 -0500
Message-Id: <20201204031722.24040-3-luben.tuikov@amd.com>
In-Reply-To: <20201204031722.24040-1-luben.tuikov@amd.com>
References: <20201204031722.24040-1-luben.tuikov@amd.com>
MIME-Version: 1.0
Cc: Alexander Deucher, Luben Tuikov, Christian König, Daniel Vetter

Rename "ring_mirror_list" to "pending_list", so that the name describes
what the list is, not what it does, how it is used, or how the hardware
implements it. The new name also keeps the hardware implementation
abstracted away: how the low-level driver communicates with the device
it drives (a ring, a CAM, etc.) should not be exposed to DRM.

The pending_list holds jobs which have been submitted and are therefore
out of our control. Usually this means they are pending execution in
hardware, but "out of our control" is the more general (inclusive)
definition.
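To make the invariant behind the new name concrete, here is a minimal,
self-contained userspace sketch (not part of the patch, and not kernel
code): the list helpers below are simplified stand-ins for the kernel's
<linux/list.h> primitives, the two structs are trimmed to the members
this series touches, and the hypothetical job_begin() only mirrors what
drm_sched_job_begin() does when a job is handed to the hardware.

/* Illustrative stand-ins for <linux/list.h>; not the kernel types. */
#include <stdio.h>

struct list_head { struct list_head *prev, *next; };

static void list_init(struct list_head *h)
{
	h->prev = h->next = h;
}

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

static int list_empty(const struct list_head *h)
{
	return h->next == h;
}

/* Trimmed to the renamed member; the real struct has many more fields. */
struct drm_gpu_scheduler {
	struct list_head pending_list;	/* was: ring_mirror_list */
};

struct drm_sched_job {
	struct list_head list;
	int id;
};

/* Mirrors drm_sched_job_begin(): a job submitted to hardware joins
 * pending_list and is from then on out of the scheduler's control. */
static void job_begin(struct drm_gpu_scheduler *sched,
		      struct drm_sched_job *job)
{
	list_add_tail(&job->list, &sched->pending_list);
}

int main(void)
{
	struct drm_gpu_scheduler sched;
	struct drm_sched_job job = { .id = 42 };

	list_init(&sched.pending_list);
	job_begin(&sched, &job);
	printf("job %d pending: %d\n", job.id,
	       !list_empty(&sched.pending_list));
	return 0;
}

The point of the sketch is only the naming: membership on pending_list
says what the job *is* (submitted, awaiting completion), independent of
whether the device consumes a ring, a CAM, or anything else.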
Signed-off-by: Luben Tuikov
Acked-by: Christian König
Cc: Alexander Deucher
Cc: Andrey Grodzovsky
Cc: Christian König
Cc: Daniel Vetter
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c |  4 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c  |  4 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c     |  2 +-
 drivers/gpu/drm/scheduler/sched_main.c      | 34 ++++++++++-----------
 include/drm/gpu_scheduler.h                 | 10 +++---
 5 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index 8358cae0b5a4..db77a5bdfa45 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -1427,7 +1427,7 @@ static void amdgpu_ib_preempt_job_recovery(struct drm_gpu_scheduler *sched)
 	struct dma_fence *fence;
 
 	spin_lock(&sched->job_list_lock);
-	list_for_each_entry(s_job, &sched->ring_mirror_list, list) {
+	list_for_each_entry(s_job, &sched->pending_list, list) {
 		fence = sched->ops->run_job(s_job);
 		dma_fence_put(fence);
 	}
@@ -1459,7 +1459,7 @@ static void amdgpu_ib_preempt_mark_partial_job(struct amdgpu_ring *ring)
 
 no_preempt:
 	spin_lock(&sched->job_list_lock);
-	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, list) {
+	list_for_each_entry_safe(s_job, tmp, &sched->pending_list, list) {
 		if (dma_fence_is_signaled(&s_job->s_fence->finished)) {
 			/* remove job from ring_mirror_list */
 			list_del_init(&s_job->list);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 4df6de81cd41..fbae600aa5f9 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -4127,8 +4127,8 @@ bool amdgpu_device_has_job_running(struct amdgpu_device *adev)
 			continue;
 
 		spin_lock(&ring->sched.job_list_lock);
-		job = list_first_entry_or_null(&ring->sched.ring_mirror_list,
-				struct drm_sched_job, list);
+		job = list_first_entry_or_null(&ring->sched.pending_list,
+					       struct drm_sched_job, list);
 		spin_unlock(&ring->sched.job_list_lock);
 		if (job)
 			return true;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index aca52a46b93d..ff48101bab55 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -271,7 +271,7 @@ void amdgpu_job_stop_all_jobs_on_sched(struct drm_gpu_scheduler *sched)
 	}
 
 	/* Signal all jobs already scheduled to HW */
-	list_for_each_entry(s_job, &sched->ring_mirror_list, list) {
+	list_for_each_entry(s_job, &sched->pending_list, list) {
 		struct drm_sched_fence *s_fence = s_job->s_fence;
 
 		dma_fence_set_error(&s_fence->finished, -EHWPOISON);
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index c52eba407ebd..b694df12aaba 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -198,7 +198,7 @@ EXPORT_SYMBOL(drm_sched_dependency_optimized);
 static void drm_sched_start_timeout(struct drm_gpu_scheduler *sched)
 {
 	if (sched->timeout != MAX_SCHEDULE_TIMEOUT &&
-	    !list_empty(&sched->ring_mirror_list))
+	    !list_empty(&sched->pending_list))
 		schedule_delayed_work(&sched->work_tdr, sched->timeout);
 }
 
@@ -258,7 +258,7 @@ void drm_sched_resume_timeout(struct drm_gpu_scheduler *sched,
 {
 	spin_lock(&sched->job_list_lock);
 
-	if (list_empty(&sched->ring_mirror_list))
+	if (list_empty(&sched->pending_list))
 		cancel_delayed_work(&sched->work_tdr);
 	else
 		mod_delayed_work(system_wq, &sched->work_tdr, remaining);
@@ -272,7 +272,7 @@ static void drm_sched_job_begin(struct drm_sched_job *s_job)
 	struct drm_gpu_scheduler *sched = s_job->sched;
 
 	spin_lock(&sched->job_list_lock);
-	list_add_tail(&s_job->list, &sched->ring_mirror_list);
+	list_add_tail(&s_job->list, &sched->pending_list);
 	drm_sched_start_timeout(sched);
 	spin_unlock(&sched->job_list_lock);
 }
@@ -286,7 +286,7 @@ static void drm_sched_job_timedout(struct work_struct *work)
 
 	/* Protects against concurrent deletion in drm_sched_get_cleanup_job */
 	spin_lock(&sched->job_list_lock);
-	job = list_first_entry_or_null(&sched->ring_mirror_list,
+	job = list_first_entry_or_null(&sched->pending_list,
 				       struct drm_sched_job, list);
 
 	if (job) {
@@ -371,7 +371,7 @@ EXPORT_SYMBOL(drm_sched_increase_karma);
  * Stop the scheduler and also removes and frees all completed jobs.
  * Note: bad job will not be freed as it might be used later and so it's
  * callers responsibility to release it manually if it's not part of the
- * mirror list any more.
+ * pending list any more.
  *
  */
 void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
@@ -392,15 +392,15 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
 	 * Add at the head of the queue to reflect it was the earliest
 	 * job extracted.
 	 */
-	list_add(&bad->list, &sched->ring_mirror_list);
+	list_add(&bad->list, &sched->pending_list);
 
 	/*
 	 * Iterate the job list from later to earlier one and either deactive
-	 * their HW callbacks or remove them from mirror list if they already
+	 * their HW callbacks or remove them from pending list if they already
 	 * signaled.
 	 * This iteration is thread safe as sched thread is stopped.
 	 */
-	list_for_each_entry_safe_reverse(s_job, tmp, &sched->ring_mirror_list,
+	list_for_each_entry_safe_reverse(s_job, tmp, &sched->pending_list,
 					 list) {
 		if (s_job->s_fence->parent &&
 		    dma_fence_remove_callback(s_job->s_fence->parent,
@@ -408,7 +408,7 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
 			atomic_dec(&sched->hw_rq_count);
 		} else {
 			/*
-			 * remove job from ring_mirror_list.
+			 * remove job from pending_list.
 			 * Locking here is for concurrent resume timeout
 			 */
 			spin_lock(&sched->job_list_lock);
@@ -463,7 +463,7 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery)
 	 * so no new jobs are being inserted or removed. Also concurrent
 	 * GPU recovers can't run in parallel.
 	 */
-	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, list) {
+	list_for_each_entry_safe(s_job, tmp, &sched->pending_list, list) {
 		struct dma_fence *fence = s_job->s_fence->parent;
 
 		atomic_inc(&sched->hw_rq_count);
 
@@ -494,7 +494,7 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery)
 EXPORT_SYMBOL(drm_sched_start);
 
 /**
- * drm_sched_resubmit_jobs - helper to relunch job from mirror ring list
+ * drm_sched_resubmit_jobs - helper to relaunch jobs from the pending list
  *
  * @sched: scheduler instance
  *
@@ -506,7 +506,7 @@ void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched)
 	bool found_guilty = false;
 	struct dma_fence *fence;
 
-	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, list) {
+	list_for_each_entry_safe(s_job, tmp, &sched->pending_list, list) {
 		struct drm_sched_fence *s_fence = s_job->s_fence;
 
 		if (!found_guilty && atomic_read(&s_job->karma) > sched->hang_limit) {
@@ -665,7 +665,7 @@ static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
  *
  * @sched: scheduler instance
  *
- * Returns the next finished job from the mirror list (if there is one)
+ * Returns the next finished job from the pending list (if there is one)
  * ready for it to be destroyed.
  */
 static struct drm_sched_job *
@@ -675,7 +675,7 @@ drm_sched_get_cleanup_job(struct drm_gpu_scheduler *sched)
 
 	/*
 	 * Don't destroy jobs while the timeout worker is running OR thread
-	 * is being parked and hence assumed to not touch ring_mirror_list
+	 * is being parked and hence assumed to not touch pending_list
 	 */
 	if ((sched->timeout != MAX_SCHEDULE_TIMEOUT &&
 	    !cancel_delayed_work(&sched->work_tdr)) ||
@@ -684,11 +684,11 @@ drm_sched_get_cleanup_job(struct drm_gpu_scheduler *sched)
 
 	spin_lock(&sched->job_list_lock);
 
-	job = list_first_entry_or_null(&sched->ring_mirror_list,
+	job = list_first_entry_or_null(&sched->pending_list,
 				       struct drm_sched_job, list);
 
 	if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
-		/* remove job from ring_mirror_list */
+		/* remove job from pending_list */
 		list_del_init(&job->list);
 	} else {
 		job = NULL;
@@ -858,7 +858,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
 	init_waitqueue_head(&sched->wake_up_worker);
 	init_waitqueue_head(&sched->job_scheduled);
-	INIT_LIST_HEAD(&sched->ring_mirror_list);
+	INIT_LIST_HEAD(&sched->pending_list);
 	spin_lock_init(&sched->job_list_lock);
 	atomic_set(&sched->hw_rq_count, 0);
 	INIT_DELAYED_WORK(&sched->work_tdr, drm_sched_job_timedout);
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 3add0072bd37..2e0c368e19f6 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -174,7 +174,7 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f);
  * @sched: the scheduler instance on which this job is scheduled.
  * @s_fence: contains the fences for the scheduling of job.
  * @finish_cb: the callback for the finished fence.
- * @node: used to append this struct to the @drm_gpu_scheduler.ring_mirror_list.
+ * @node: used to append this struct to the @drm_gpu_scheduler.pending_list.
  * @id: a unique id assigned to each job scheduled on the scheduler.
  * @karma: increment on every hang caused by this job. If this exceeds the hang
 *         limit of the scheduler then the job is marked guilty and will not
@@ -203,7 +203,7 @@ struct drm_sched_job {
 static inline bool drm_sched_invalidate_job(struct drm_sched_job *s_job,
 					    int threshold)
 {
-	return (s_job && atomic_inc_return(&s_job->karma) > threshold);
+	return s_job && atomic_inc_return(&s_job->karma) > threshold;
 }
 
 /**
@@ -260,8 +260,8 @@ struct drm_sched_backend_ops {
 * @work_tdr: schedules a delayed call to @drm_sched_job_timedout after the
 *            timeout interval is over.
 * @thread: the kthread on which the scheduler which run.
- * @ring_mirror_list: the list of jobs which are currently in the job queue.
- * @job_list_lock: lock to protect the ring_mirror_list.
+ * @pending_list: the list of jobs which are currently in the job queue.
+ * @job_list_lock: lock to protect the pending_list.
 * @hang_limit: once the hangs by a job crosses this limit then it is marked
 *              guilty and it will be considered for scheduling further.
 * @score: score to help loadbalancer pick a idle sched
@@ -282,7 +282,7 @@ struct drm_gpu_scheduler {
 	atomic64_t			job_id_count;
 	struct delayed_work		work_tdr;
 	struct task_struct		*thread;
-	struct list_head		ring_mirror_list;
+	struct list_head		pending_list;
 	spinlock_t			job_list_lock;
 	int				hang_limit;
 	atomic_t			score;
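A closing note on the consumer side that this series renames but does not
change: drm_sched_get_cleanup_job() above peeks at the first entry of
pending_list under job_list_lock and unlinks it only once its finished
fence has signaled. The standalone sketch below models that pattern with
the fence reduced to a plain flag and the locking omitted; it is an
illustration under those assumptions, not the kernel implementation, and
get_cleanup_job()/struct job are hypothetical names.

#include <stddef.h>
#include <stdio.h>

struct list_head { struct list_head *prev, *next; };

static void list_init(struct list_head *h)
{
	h->prev = h->next = h;
}

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

static void list_del_init(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	list_init(n);
}

static int list_empty(const struct list_head *h)
{
	return h->next == h;
}

struct job {
	struct list_head list;
	int finished;	/* stand-in for dma_fence_is_signaled() */
	int id;
};

/* Modeled on drm_sched_get_cleanup_job(): return the first job of
 * @pending unlinked from the list, but only if it has finished. */
static struct job *get_cleanup_job(struct list_head *pending)
{
	struct job *job;

	if (list_empty(pending))
		return NULL;
	/* container_of() by hand: @list is embedded in struct job. */
	job = (struct job *)((char *)pending->next -
			     offsetof(struct job, list));
	if (!job->finished)
		return NULL;
	list_del_init(&job->list);
	return job;
}

int main(void)
{
	struct list_head pending;
	struct job a = { .finished = 0, .id = 1 };
	struct job *done;

	list_init(&pending);
	list_add_tail(&a.list, &pending);

	/* Still running: the job must stay on the pending list. */
	printf("cleanup while running: %p\n", (void *)get_cleanup_job(&pending));
	a.finished = 1;	/* the "fence" signals */
	done = get_cleanup_job(&pending);
	printf("cleanup after signal: job %d\n", done ? done->id : -1);
	return 0;
}

Either way the rename pays off here: "first entry of pending_list" reads
as exactly what the function is looking for, a submitted job that may or
may not have completed, with no reference to how the hardware is fed.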