From patchwork Thu Dec 6 17:41:14 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andrey Grodzovsky
X-Patchwork-Id: 10716527
X-Original-To: dri-devel@lists.freedesktop.org
From: Andrey Grodzovsky
Subject: [PATCH 2/2] drm/sched: Rework HW fence processing.
Date: Thu, 6 Dec 2018 12:41:14 -0500
Message-ID: <1544118074-24910-2-git-send-email-andrey.grodzovsky@amd.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1544118074-24910-1-git-send-email-andrey.grodzovsky@amd.com>
References: <1544118074-24910-1-git-send-email-andrey.grodzovsky@amd.com>
Cc: Monk.Liu@amd.com

Expedite job deletion from the ring mirror list to the HW fence signal
callback instead of doing it from finish_work. Together with waiting for
all such fences to signal in drm_sched_stop, this guarantees that an
already signaled job will not be processed twice. Remove the scheduler
finish fence callback and just submit finish_work directly from the HW
fence callback.

Suggested-by: Christian Koenig
Signed-off-by: Andrey Grodzovsky
---
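[Note for reviewers, not part of the patch: the sched_main.c hunks below
condense to the following shape for the HW fence callback. This is an
illustrative sketch reconstructed from the diff; the diff itself is
authoritative.]

/* Illustrative sketch only -- see the sched_main.c hunk for the real change. */
static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
{
	struct drm_sched_fence *s_fence =
		container_of(cb, struct drm_sched_fence, cb);
	struct drm_gpu_scheduler *sched = s_fence->sched;
	struct drm_sched_job *s_job = s_fence->s_job;
	unsigned long flags;

	/* The HW fence signaled, so the pending timeout no longer applies. */
	cancel_delayed_work(&sched->work_tdr);

	atomic_dec(&sched->hw_rq_count);
	atomic_dec(&sched->num_jobs);

	/*
	 * Remove the job from the ring mirror list right here, under the
	 * list lock, instead of deferring the removal to finish_work.
	 */
	spin_lock_irqsave(&sched->job_list_lock, flags);
	list_del_init(&s_job->node);
	spin_unlock_irqrestore(&sched->job_list_lock, flags);

	drm_sched_fence_finished(s_fence);
	trace_drm_sched_process_job(s_fence);
	wake_up_interruptible(&sched->wake_up_worker);

	/* finish_work is queued directly; it only re-arms the TDR and frees the job. */
	schedule_work(&s_job->finish_work);
}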
 drivers/gpu/drm/scheduler/sched_fence.c |  4 +++-
 drivers/gpu/drm/scheduler/sched_main.c  | 39 ++++++++++++++++-----------------
 include/drm/gpu_scheduler.h             | 10 +++++++--
 3 files changed, 30 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
index d8d2dff..e62c239 100644
--- a/drivers/gpu/drm/scheduler/sched_fence.c
+++ b/drivers/gpu/drm/scheduler/sched_fence.c
@@ -151,7 +151,8 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f)
 EXPORT_SYMBOL(to_drm_sched_fence);
 
 struct drm_sched_fence *drm_sched_fence_create(struct drm_sched_entity *entity,
-					       void *owner)
+					       void *owner,
+					       struct drm_sched_job *s_job)
 {
 	struct drm_sched_fence *fence = NULL;
 	unsigned seq;
@@ -163,6 +164,7 @@ struct drm_sched_fence *drm_sched_fence_create(struct drm_sched_entity *entity,
 	fence->owner = owner;
 	fence->sched = entity->rq->sched;
 	spin_lock_init(&fence->lock);
+	fence->s_job = s_job;
 
 	seq = atomic_inc_return(&entity->fence_seq);
 	dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 8fb7f86..2860037 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -284,31 +284,17 @@ static void drm_sched_job_finish(struct work_struct *work)
 	cancel_delayed_work_sync(&sched->work_tdr);
 
 	spin_lock_irqsave(&sched->job_list_lock, flags);
-	/* remove job from ring_mirror_list */
-	list_del_init(&s_job->node);
-	/* queue TDR for next job */
 	drm_sched_start_timeout(sched);
 	spin_unlock_irqrestore(&sched->job_list_lock, flags);
 
 	sched->ops->free_job(s_job);
 }
 
-static void drm_sched_job_finish_cb(struct dma_fence *f,
-				    struct dma_fence_cb *cb)
-{
-	struct drm_sched_job *job = container_of(cb, struct drm_sched_job,
-						 finish_cb);
-	schedule_work(&job->finish_work);
-}
-
 static void drm_sched_job_begin(struct drm_sched_job *s_job)
 {
 	struct drm_gpu_scheduler *sched = s_job->sched;
 	unsigned long flags;
 
-	dma_fence_add_callback(&s_job->s_fence->finished, &s_job->finish_cb,
-			       drm_sched_job_finish_cb);
-
 	spin_lock_irqsave(&sched->job_list_lock, flags);
 	list_add_tail(&s_job->node, &sched->ring_mirror_list);
 	drm_sched_start_timeout(sched);
@@ -418,13 +404,17 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool unpark_only)
 {
 	struct drm_sched_job *s_job, *tmp;
 	bool found_guilty = false;
-	unsigned long flags;
 	int r;
 
 	if (unpark_only)
 		goto unpark;
 
-	spin_lock_irqsave(&sched->job_list_lock, flags);
+	/*
+	 * Locking the list is not required here as the sched thread is parked
+	 * so no new jobs are being pushed in to HW and in drm_sched_stop we
+	 * flushed any in flight jobs who didn't signal yet. Also concurrent
+	 * GPU recovers can't run in parallel.
+	 */
 	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
 		struct drm_sched_fence *s_fence = s_job->s_fence;
 		struct dma_fence *fence;
@@ -453,7 +443,6 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool unpark_only)
 	}
 
 	drm_sched_start_timeout(sched);
-	spin_unlock_irqrestore(&sched->job_list_lock, flags);
 
 unpark:
 	kthread_unpark(sched->thread);
@@ -505,7 +494,7 @@ int drm_sched_job_init(struct drm_sched_job *job,
 	job->sched = sched;
 	job->entity = entity;
 	job->s_priority = entity->rq - sched->sched_rq;
-	job->s_fence = drm_sched_fence_create(entity, owner);
+	job->s_fence = drm_sched_fence_create(entity, owner, job);
 	if (!job->s_fence)
 		return -ENOMEM;
 	job->id = atomic64_inc_return(&sched->job_id_count);
@@ -593,15 +582,25 @@ static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
 	struct drm_sched_fence *s_fence =
 		container_of(cb, struct drm_sched_fence, cb);
 	struct drm_gpu_scheduler *sched = s_fence->sched;
+	struct drm_sched_job *s_job = s_fence->s_job;
+	unsigned long flags;
+
+	cancel_delayed_work(&sched->work_tdr);
 
-	dma_fence_get(&s_fence->finished);
 	atomic_dec(&sched->hw_rq_count);
 	atomic_dec(&sched->num_jobs);
+
+	spin_lock_irqsave(&sched->job_list_lock, flags);
+	/* remove job from ring_mirror_list */
+	list_del_init(&s_job->node);
+	spin_unlock_irqrestore(&sched->job_list_lock, flags);
+
 	drm_sched_fence_finished(s_fence);
 
 	trace_drm_sched_process_job(s_fence);
-	dma_fence_put(&s_fence->finished);
 	wake_up_interruptible(&sched->wake_up_worker);
+
+	schedule_work(&s_job->finish_work);
 }
 
 /**
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index c94b592..23855c6 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -115,6 +115,8 @@ struct drm_sched_rq {
 	struct drm_sched_entity *current_entity;
 };
 
+struct drm_sched_job;
+
 /**
  * struct drm_sched_fence - fences corresponding to the scheduling of a job.
  */
@@ -160,6 +162,9 @@ struct drm_sched_fence {
 	 * @owner: job owner for debugging
 	 */
 	void *owner;
+
+	/* Back pointer to owning job */
+	struct drm_sched_job *s_job;
 };
 
 struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f);
@@ -330,8 +335,9 @@ void drm_sched_entity_set_priority(struct drm_sched_entity *entity,
 				   enum drm_sched_priority priority);
 bool drm_sched_entity_is_ready(struct drm_sched_entity *entity);
 
-struct drm_sched_fence *drm_sched_fence_create(
-	struct drm_sched_entity *s_entity, void *owner);
+struct drm_sched_fence *drm_sched_fence_create(struct drm_sched_entity *s_entity,
+					       void *owner,
+					       struct drm_sched_job *s_job);
 void drm_sched_fence_scheduled(struct drm_sched_fence *fence);
 void drm_sched_fence_finished(struct drm_sched_fence *fence);