From patchwork Fri Dec  4 03:17:18 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Luben Tuikov
X-Patchwork-Id: 11950571
From: Luben Tuikov
To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: [PATCH 1/5] drm/scheduler: "node" --> "list"
Date: Thu, 3 Dec 2020 22:17:18 -0500
Message-Id: <20201204031722.24040-2-luben.tuikov@amd.com>
X-Mailer: git-send-email 2.29.2.404.ge67fbf927d
In-Reply-To: <20201204031722.24040-1-luben.tuikov@amd.com>
References: <20201204031722.24040-1-luben.tuikov@amd.com>
List-Id: Direct Rendering Infrastructure - Development
Cc: Alexander Deucher, Luben Tuikov, Christian König, Daniel Vetter

Rename "node" to "list" in struct drm_sched_job, in order to make
it consistent with what we see being used throughout gpu_scheduler.h,
for instance in struct drm_sched_entity, as well as the rest of DRM
and the kernel.

Signed-off-by: Luben Tuikov
Reviewed-by: Christian König
Cc: Alexander Deucher
Cc: Andrey Grodzovsky
Cc: Christian König
Cc: Daniel Vetter
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c |  6 +++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c  |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c     |  2 +-
 drivers/gpu/drm/scheduler/sched_main.c      | 23 +++++++++++----------
 include/drm/gpu_scheduler.h                 |  4 ++--
 5 files changed, 19 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index 5c1f3725c741..8358cae0b5a4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -1427,7 +1427,7 @@ static void amdgpu_ib_preempt_job_recovery(struct drm_gpu_scheduler *sched)
 	struct dma_fence *fence;
 
 	spin_lock(&sched->job_list_lock);
-	list_for_each_entry(s_job, &sched->ring_mirror_list, node) {
+	list_for_each_entry(s_job, &sched->ring_mirror_list, list) {
 		fence = sched->ops->run_job(s_job);
 		dma_fence_put(fence);
 	}
@@ -1459,10 +1459,10 @@ static void amdgpu_ib_preempt_mark_partial_job(struct amdgpu_ring *ring)
 
 no_preempt:
 	spin_lock(&sched->job_list_lock);
-	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
+	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, list) {
 		if (dma_fence_is_signaled(&s_job->s_fence->finished)) {
 			/* remove job from ring_mirror_list */
-			list_del_init(&s_job->node);
+			list_del_init(&s_job->list);
 			sched->ops->free_job(s_job);
 			continue;
 		}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 7560b05e4ac1..4df6de81cd41 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -4128,7 +4128,7 @@ bool amdgpu_device_has_job_running(struct amdgpu_device *adev)
 
 		spin_lock(&ring->sched.job_list_lock);
 		job = list_first_entry_or_null(&ring->sched.ring_mirror_list,
-				struct drm_sched_job, node);
+				struct drm_sched_job, list);
 		spin_unlock(&ring->sched.job_list_lock);
 		if (job)
 			return true;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index dcfe8a3b03ff..aca52a46b93d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -271,7 +271,7 @@ void amdgpu_job_stop_all_jobs_on_sched(struct drm_gpu_scheduler *sched)
 	}
 
 	/* Signal all jobs already scheduled to HW */
-	list_for_each_entry(s_job, &sched->ring_mirror_list, node) {
+	list_for_each_entry(s_job, &sched->ring_mirror_list, list) {
 		struct drm_sched_fence *s_fence = s_job->s_fence;
 
 		dma_fence_set_error(&s_fence->finished, -EHWPOISON);
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index c6332d75025e..c52eba407ebd 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -272,7 +272,7 @@ static void drm_sched_job_begin(struct drm_sched_job *s_job)
 	struct drm_gpu_scheduler *sched = s_job->sched;
 
 	spin_lock(&sched->job_list_lock);
-	list_add_tail(&s_job->node, &sched->ring_mirror_list);
+	list_add_tail(&s_job->list, &sched->ring_mirror_list);
 	drm_sched_start_timeout(sched);
 	spin_unlock(&sched->job_list_lock);
 }
@@ -287,7 +287,7 @@ static void drm_sched_job_timedout(struct work_struct *work)
 	/* Protects against concurrent deletion in drm_sched_get_cleanup_job */
 	spin_lock(&sched->job_list_lock);
 	job = list_first_entry_or_null(&sched->ring_mirror_list,
-				       struct drm_sched_job, node);
+				       struct drm_sched_job, list);
 
 	if (job) {
 		/*
@@ -295,7 +295,7 @@ static void drm_sched_job_timedout(struct work_struct *work)
 		 * drm_sched_cleanup_jobs. It will be reinserted back after sched->thread
 		 * is parked at which point it's safe.
 		 */
-		list_del_init(&job->node);
+		list_del_init(&job->list);
 		spin_unlock(&sched->job_list_lock);
 
 		job->sched->ops->timedout_job(job);
@@ -392,7 +392,7 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
 		 * Add at the head of the queue to reflect it was the earliest
 		 * job extracted.
 		 */
-		list_add(&bad->node, &sched->ring_mirror_list);
+		list_add(&bad->list, &sched->ring_mirror_list);
 
 	/*
 	 * Iterate the job list from later to earlier one and either deactive
@@ -400,7 +400,8 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
 	 * signaled.
 	 * This iteration is thread safe as sched thread is stopped.
 	 */
-	list_for_each_entry_safe_reverse(s_job, tmp, &sched->ring_mirror_list, node) {
+	list_for_each_entry_safe_reverse(s_job, tmp, &sched->ring_mirror_list,
+					 list) {
 		if (s_job->s_fence->parent &&
 		    dma_fence_remove_callback(s_job->s_fence->parent,
 					      &s_job->cb)) {
@@ -411,7 +412,7 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
 			 * Locking here is for concurrent resume timeout
 			 */
 			spin_lock(&sched->job_list_lock);
-			list_del_init(&s_job->node);
+			list_del_init(&s_job->list);
 			spin_unlock(&sched->job_list_lock);
 
 			/*
@@ -462,7 +463,7 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery)
 	 * so no new jobs are being inserted or removed. Also concurrent
 	 * GPU recovers can't run in parallel.
 	 */
-	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
+	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, list) {
 		struct dma_fence *fence = s_job->s_fence->parent;
 
 		atomic_inc(&sched->hw_rq_count);
@@ -505,7 +506,7 @@ void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched)
 	bool found_guilty = false;
 	struct dma_fence *fence;
 
-	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
+	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, list) {
 		struct drm_sched_fence *s_fence = s_job->s_fence;
 
 		if (!found_guilty && atomic_read(&s_job->karma) > sched->hang_limit) {
@@ -565,7 +566,7 @@ int drm_sched_job_init(struct drm_sched_job *job,
 		return -ENOMEM;
 
 	job->id = atomic64_inc_return(&sched->job_id_count);
-	INIT_LIST_HEAD(&job->node);
+	INIT_LIST_HEAD(&job->list);
 
 	return 0;
 }
@@ -684,11 +685,11 @@ drm_sched_get_cleanup_job(struct drm_gpu_scheduler *sched)
 	spin_lock(&sched->job_list_lock);
 
 	job = list_first_entry_or_null(&sched->ring_mirror_list,
-				       struct drm_sched_job, node);
+				       struct drm_sched_job, list);
 
 	if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
 		/* remove job from ring_mirror_list */
-		list_del_init(&job->node);
+		list_del_init(&job->list);
 	} else {
 		job = NULL;
 		/* queue timeout for next job */
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 92436553fd6a..3add0072bd37 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -189,14 +189,14 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f);
  */
 struct drm_sched_job {
 	struct spsc_node		queue_node;
+	struct list_head		list;
 	struct drm_gpu_scheduler	*sched;
 	struct drm_sched_fence		*s_fence;
 	struct dma_fence_cb		finish_cb;
-	struct list_head		node;
 	uint64_t			id;
 	atomic_t			karma;
 	enum drm_sched_priority		s_priority;
-	struct drm_sched_entity  *entity;
+	struct drm_sched_entity         *entity;
 	struct dma_fence_cb		cb;
 };
 
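
As an aside on the naming convention (illustrative only, not part of
the patch): an embedded struct list_head member is conventionally named
for what the containing object sits on -- here each job is on the
scheduler's mirror list, so s_job->list reads naturally at call sites
such as list_add_tail(&s_job->list, &sched->ring_mirror_list). Below is
a minimal userspace sketch of that intrusive-list idiom; struct toy_job
and these macros are invented stand-ins loosely modelled on the
kernel's include/linux/list.h, not the kernel code itself.

/* Toy userspace re-implementation of the kernel's intrusive list
 * idiom; builds with gcc or clang (uses __typeof__). */
#include <stddef.h>
#include <stdio.h>

struct list_head {
	struct list_head *next, *prev;
};

static void INIT_LIST_HEAD(struct list_head *head)
{
	head->next = head->prev = head;
}

static void list_add_tail(struct list_head *new, struct list_head *head)
{
	new->prev = head->prev;
	new->next = head;
	head->prev->next = new;
	head->prev = new;
}

/* Recover the containing structure from a pointer to its member. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Walk the containers through their embedded list_head, mirroring the
 * three-argument kernel macro of the same name. */
#define list_for_each_entry(pos, head, member)                             \
	for (pos = container_of((head)->next, __typeof__(*pos), member);   \
	     &pos->member != (head);                                        \
	     pos = container_of(pos->member.next, __typeof__(*pos), member))

/* Hypothetical stand-in for struct drm_sched_job: after this patch the
 * embedded member is named "list" rather than "node". */
struct toy_job {
	int id;
	struct list_head list;
};

int main(void)
{
	struct list_head ring_mirror_list;
	struct toy_job a = { .id = 1 }, b = { .id = 2 }, *job;

	INIT_LIST_HEAD(&ring_mirror_list);
	list_add_tail(&a.list, &ring_mirror_list);
	list_add_tail(&b.list, &ring_mirror_list);

	/* Reads like the renamed call sites in sched_main.c. */
	list_for_each_entry(job, &ring_mirror_list, list)
		printf("job %d\n", job->id);

	return 0;
}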