From patchwork Fri Dec 4 03:17:21 2020
X-Patchwork-Submitter: Luben Tuikov
X-Patchwork-Id: 11950573
From: Luben Tuikov
To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: [PATCH 4/5] drm/scheduler: Job timeout handler returns status (v2)
Date: Thu, 3 Dec 2020 22:17:21 -0500
Message-Id: <20201204031722.24040-5-luben.tuikov@amd.com>
In-Reply-To: <20201204031722.24040-1-luben.tuikov@amd.com>
References: <20201204031722.24040-1-luben.tuikov@amd.com>
MIME-Version: 1.0
Cc: kernel test robot, Tomeu Vizoso, Daniel Vetter, Alyssa Rosenzweig,
 Steven Price, Luben Tuikov, Qiang Yu, Russell King, Alexander Deucher,
 Christian König

The driver's job timeout handler now returns a status code to the DRM
layer, indicating whether the task (job) was successfully aborted or
whether more time should be given to the task to complete.

Default behaviour as of this patch is preserved, except in the
obvious-by-comment case in the Panfrost driver, as documented below.
All drivers which make use of the drm_sched_backend_ops'
.timedout_job() callback have been updated accordingly and return the
would've-been default value of DRM_TASK_STATUS_ALIVE to restart the
task's timeout timer--this is the old behaviour, and it is preserved
by this patch.

In the case of the Panfrost driver, its timedout callback correctly
first checks whether the job completed in due time; if so, it now
returns DRM_TASK_STATUS_COMPLETE to notify the DRM layer that the task
can be moved to the done list, to be freed later. In the other two
subsequent checks, the value of DRM_TASK_STATUS_ALIVE is returned, as
per the default behaviour.

More involved driver solutions can be had in subsequent patches.
Signed-off-by: Luben Tuikov
Reported-by: kernel test robot
Cc: Alexander Deucher
Cc: Andrey Grodzovsky
Cc: Christian König
Cc: Daniel Vetter
Cc: Lucas Stach
Cc: Russell King
Cc: Christian Gmeiner
Cc: Qiang Yu
Cc: Rob Herring
Cc: Tomeu Vizoso
Cc: Steven Price
Cc: Alyssa Rosenzweig
Cc: Eric Anholt

v2: Use enum as the status of a driver's job timeout callback method.
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c |  6 +++--
 drivers/gpu/drm/etnaviv/etnaviv_sched.c | 10 +++++++-
 drivers/gpu/drm/lima/lima_sched.c       |  4 +++-
 drivers/gpu/drm/panfrost/panfrost_job.c |  9 ++++---
 drivers/gpu/drm/scheduler/sched_main.c  |  4 +---
 drivers/gpu/drm/v3d/v3d_sched.c         | 32 +++++++++++++------------
 include/drm/gpu_scheduler.h             | 20 +++++++++++++---
 7 files changed, 57 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index ff48101bab55..a111326cbdde 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -28,7 +28,7 @@
 #include "amdgpu.h"
 #include "amdgpu_trace.h"
 
-static void amdgpu_job_timedout(struct drm_sched_job *s_job)
+static enum drm_task_status amdgpu_job_timedout(struct drm_sched_job *s_job)
 {
 	struct amdgpu_ring *ring = to_amdgpu_ring(s_job->sched);
 	struct amdgpu_job *job = to_amdgpu_job(s_job);
@@ -41,7 +41,7 @@ static void amdgpu_job_timedout(struct drm_sched_job *s_job)
 	    amdgpu_ring_soft_recovery(ring, job->vmid, s_job->s_fence->parent)) {
 		DRM_ERROR("ring %s timeout, but soft recovered\n",
 			  s_job->sched->name);
-		return;
+		return DRM_TASK_STATUS_ALIVE;
 	}
 
 	amdgpu_vm_get_task_info(ring->adev, job->pasid, &ti);
@@ -53,10 +53,12 @@ static void amdgpu_job_timedout(struct drm_sched_job *s_job)
 
 	if (amdgpu_device_should_recover_gpu(ring->adev)) {
 		amdgpu_device_gpu_recover(ring->adev, job);
+		return DRM_TASK_STATUS_ALIVE;
 	} else {
 		drm_sched_suspend_timeout(&ring->sched);
 		if (amdgpu_sriov_vf(adev))
 			adev->virt.tdr_debug = true;
+		return DRM_TASK_STATUS_ALIVE;
 	}
 }
 
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
index cd46c882269c..c49516942328 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
@@ -82,7 +82,8 @@ static struct dma_fence *etnaviv_sched_run_job(struct drm_sched_job *sched_job)
 	return fence;
 }
 
-static void etnaviv_sched_timedout_job(struct drm_sched_job *sched_job)
+static enum drm_task_status etnaviv_sched_timedout_job(struct drm_sched_job
+						       *sched_job)
 {
 	struct etnaviv_gem_submit *submit = to_etnaviv_submit(sched_job);
 	struct etnaviv_gpu *gpu = submit->gpu;
@@ -120,9 +121,16 @@ static void etnaviv_sched_timedout_job(struct drm_sched_job *sched_job)
 
 	drm_sched_resubmit_jobs(&gpu->sched);
 
+	/* Tell the DRM scheduler that this task needs
+	 * more time.
+	 */
+	drm_sched_start(&gpu->sched, true);
+	return DRM_TASK_STATUS_ALIVE;
+
 out_no_timeout:
 	/* restart scheduler after GPU is usable again */
 	drm_sched_start(&gpu->sched, true);
+	return DRM_TASK_STATUS_ALIVE;
 }
 
 static void etnaviv_sched_free_job(struct drm_sched_job *sched_job)
diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index 63b4c5643f9c..66d9236b8760 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -415,7 +415,7 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
 	mutex_unlock(&dev->error_task_list_lock);
 }
 
-static void lima_sched_timedout_job(struct drm_sched_job *job)
+static enum drm_task_status lima_sched_timedout_job(struct drm_sched_job *job)
 {
 	struct lima_sched_pipe *pipe = to_lima_pipe(job->sched);
 	struct lima_sched_task *task = to_lima_task(job);
@@ -449,6 +449,8 @@ static void lima_sched_timedout_job(struct drm_sched_job *job)
 
 	drm_sched_resubmit_jobs(&pipe->base);
 	drm_sched_start(&pipe->base, true);
+
+	return DRM_TASK_STATUS_ALIVE;
 }
 
 static void lima_sched_free_job(struct drm_sched_job *job)
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index 04e6f6f9b742..845148a722e4 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -432,7 +432,8 @@ static void panfrost_scheduler_start(struct panfrost_queue_state *queue)
 	mutex_unlock(&queue->lock);
 }
 
-static void panfrost_job_timedout(struct drm_sched_job *sched_job)
+static enum drm_task_status panfrost_job_timedout(struct drm_sched_job
+						  *sched_job)
 {
 	struct panfrost_job *job = to_panfrost_job(sched_job);
 	struct panfrost_device *pfdev = job->pfdev;
@@ -443,7 +444,7 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
 	 * spurious. Bail out.
 	 */
 	if (dma_fence_is_signaled(job->done_fence))
-		return;
+		return DRM_TASK_STATUS_COMPLETE;
 
 	dev_err(pfdev->dev, "gpu sched timeout, js=%d, config=0x%x, status=0x%x, head=0x%x, tail=0x%x, sched_job=%p",
 		js,
@@ -455,11 +456,13 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
 
 	/* Scheduler is already stopped, nothing to do. */
 	if (!panfrost_scheduler_stop(&pfdev->js->queue[js], sched_job))
-		return;
+		return DRM_TASK_STATUS_ALIVE;
 
 	/* Schedule a reset if there's no reset in progress. */
 	if (!atomic_xchg(&pfdev->reset.pending, 1))
 		schedule_work(&pfdev->reset.work);
+
+	return DRM_TASK_STATUS_ALIVE;
 }
 
 static const struct drm_sched_backend_ops panfrost_sched_ops = {
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 3eb7618a627d..b9876cad94f2 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -526,7 +526,7 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery)
 EXPORT_SYMBOL(drm_sched_start);
 
 /**
- * drm_sched_resubmit_jobs - helper to relunch job from pending ring list
+ * drm_sched_resubmit_jobs - helper to relaunch jobs from the pending list
  *
  * @sched: scheduler instance
  *
@@ -560,8 +560,6 @@ void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched)
 		} else {
 			s_job->s_fence->parent = fence;
 		}
-
-
 	}
 }
 EXPORT_SYMBOL(drm_sched_resubmit_jobs);
diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
index 452682e2209f..3740665ec479 100644
--- a/drivers/gpu/drm/v3d/v3d_sched.c
+++ b/drivers/gpu/drm/v3d/v3d_sched.c
@@ -259,7 +259,7 @@ v3d_cache_clean_job_run(struct drm_sched_job *sched_job)
 	return NULL;
 }
 
-static void
+static enum drm_task_status
 v3d_gpu_reset_for_timeout(struct v3d_dev *v3d, struct drm_sched_job *sched_job)
 {
 	enum v3d_queue q;
@@ -285,6 +285,8 @@ v3d_gpu_reset_for_timeout(struct v3d_dev *v3d, struct drm_sched_job *sched_job)
 	}
 
 	mutex_unlock(&v3d->reset_lock);
+
+	return DRM_TASK_STATUS_ALIVE;
 }
 
 /* If the current address or return address have changed, then the GPU
@@ -292,7 +294,7 @@ v3d_gpu_reset_for_timeout(struct v3d_dev *v3d, struct drm_sched_job *sched_job)
  * could fail if the GPU got in an infinite loop in the CL, but that
  * is pretty unlikely outside of an i-g-t testcase.
  */
-static void
+static enum drm_task_status
 v3d_cl_job_timedout(struct drm_sched_job *sched_job, enum v3d_queue q,
 		    u32 *timedout_ctca, u32 *timedout_ctra)
 {
@@ -304,39 +306,39 @@ v3d_cl_job_timedout(struct drm_sched_job *sched_job, enum v3d_queue q,
 	if (*timedout_ctca != ctca || *timedout_ctra != ctra) {
 		*timedout_ctca = ctca;
 		*timedout_ctra = ctra;
-		return;
+		return DRM_TASK_STATUS_ALIVE;
 	}
 
-	v3d_gpu_reset_for_timeout(v3d, sched_job);
+	return v3d_gpu_reset_for_timeout(v3d, sched_job);
 }
 
-static void
+static enum drm_task_status
 v3d_bin_job_timedout(struct drm_sched_job *sched_job)
 {
 	struct v3d_bin_job *job = to_bin_job(sched_job);
 
-	v3d_cl_job_timedout(sched_job, V3D_BIN,
-			    &job->timedout_ctca, &job->timedout_ctra);
+	return v3d_cl_job_timedout(sched_job, V3D_BIN,
+				   &job->timedout_ctca, &job->timedout_ctra);
 }
 
-static void
+static enum drm_task_status
 v3d_render_job_timedout(struct drm_sched_job *sched_job)
 {
 	struct v3d_render_job *job = to_render_job(sched_job);
 
-	v3d_cl_job_timedout(sched_job, V3D_RENDER,
-			    &job->timedout_ctca, &job->timedout_ctra);
+	return v3d_cl_job_timedout(sched_job, V3D_RENDER,
+				   &job->timedout_ctca, &job->timedout_ctra);
 }
 
-static void
+static enum drm_task_status
 v3d_generic_job_timedout(struct drm_sched_job *sched_job)
 {
 	struct v3d_job *job = to_v3d_job(sched_job);
 
-	v3d_gpu_reset_for_timeout(job->v3d, sched_job);
+	return v3d_gpu_reset_for_timeout(job->v3d, sched_job);
 }
 
-static void
+static enum drm_task_status
 v3d_csd_job_timedout(struct drm_sched_job *sched_job)
 {
 	struct v3d_csd_job *job = to_csd_job(sched_job);
@@ -348,10 +350,10 @@ v3d_csd_job_timedout(struct drm_sched_job *sched_job)
 	 */
 	if (job->timedout_batches != batches) {
 		job->timedout_batches = batches;
-		return;
+		return DRM_TASK_STATUS_ALIVE;
 	}
 
-	v3d_gpu_reset_for_timeout(v3d, sched_job);
+	return v3d_gpu_reset_for_timeout(v3d, sched_job);
 }
 
 static const struct drm_sched_backend_ops v3d_bin_sched_ops = {
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 2e0c368e19f6..cedfc5394e52 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -206,6 +206,11 @@ static inline bool drm_sched_invalidate_job(struct drm_sched_job *s_job,
 	return s_job && atomic_inc_return(&s_job->karma) > threshold;
 }
 
+enum drm_task_status {
+	DRM_TASK_STATUS_COMPLETE,
+	DRM_TASK_STATUS_ALIVE
+};
+
 /**
  * struct drm_sched_backend_ops
  *
@@ -230,10 +235,19 @@ struct drm_sched_backend_ops {
 	struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
 
 	/**
-	 * @timedout_job: Called when a job has taken too long to execute,
-	 * to trigger GPU recovery.
+	 * @timedout_job: Called when a job has taken too long to execute,
+	 * to trigger GPU recovery.
+	 *
+	 * Return DRM_TASK_STATUS_ALIVE, if the task (job) is healthy
+	 * and executing in the hardware, i.e. it needs more time.
+	 *
+	 * Return DRM_TASK_STATUS_COMPLETE, if the task (job) has
+	 * been aborted or is unknown to the hardware, i.e. if
+	 * the task is out of the hardware, and maybe it is now
+	 * in the done list, or it was completed long ago, or
+	 * if it is unknown to the hardware.
 	 */
-	void (*timedout_job)(struct drm_sched_job *sched_job);
+	enum drm_task_status (*timedout_job)(struct drm_sched_job *sched_job);
 
 	/**
 	 * @free_job: Called once the job's finished fence has been signaled