From patchwork Thu Dec 27 19:28:07 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andrey Grodzovsky
X-Patchwork-Id: 10744033
Delivered-To: dri-devel@lists.freedesktop.org
From: Andrey Grodzovsky
Subject: [PATCH v6 2/2] drm/sched: Rework HW fence processing.
Date: Thu, 27 Dec 2018 14:28:07 -0500
Message-ID: <1545938887-22901-2-git-send-email-andrey.grodzovsky@amd.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1545938887-22901-1-git-send-email-andrey.grodzovsky@amd.com>
References: <1545938887-22901-1-git-send-email-andrey.grodzovsky@amd.com>
List-Id: Direct Rendering Infrastructure - Development
Cc: Monk.Liu@amd.com
Expedite job deletion from the ring mirror list to the HW fence signal
callback instead of from finish_work. Together with waiting for all such
fences to signal in drm_sched_stop, this guarantees that an already
signaled job will not be processed twice.

Remove the sched finish fence callback and just submit finish_work
directly from the HW fence callback.

v2: Fix comments.
v3: Attach hw fence cb to sched_job.
v5: Rebase.

Suggested-by: Christian Koenig
Signed-off-by: Andrey Grodzovsky
---
 drivers/gpu/drm/scheduler/sched_main.c | 59 +++++++++++++++++-----------------
 include/drm/gpu_scheduler.h            |  6 ++--
 2 files changed, 31 insertions(+), 34 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 54e809b..58bd33a 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -284,8 +284,6 @@ static void drm_sched_job_finish(struct work_struct *work)
 	cancel_delayed_work_sync(&sched->work_tdr);
 
 	spin_lock_irqsave(&sched->job_list_lock, flags);
-	/* remove job from ring_mirror_list */
-	list_del_init(&s_job->node);
 	/* queue TDR for next job */
 	drm_sched_start_timeout(sched);
 	spin_unlock_irqrestore(&sched->job_list_lock, flags);
@@ -293,22 +291,11 @@ static void drm_sched_job_finish(struct work_struct *work)
 	sched->ops->free_job(s_job);
 }
 
-static void drm_sched_job_finish_cb(struct dma_fence *f,
-				    struct dma_fence_cb *cb)
-{
-	struct drm_sched_job *job = container_of(cb, struct drm_sched_job,
-						 finish_cb);
-	schedule_work(&job->finish_work);
-}
-
 static void drm_sched_job_begin(struct drm_sched_job *s_job)
 {
 	struct drm_gpu_scheduler *sched = s_job->sched;
 	unsigned long flags;
 
-	dma_fence_add_callback(&s_job->s_fence->finished, &s_job->finish_cb,
-			       drm_sched_job_finish_cb);
-
 	spin_lock_irqsave(&sched->job_list_lock, flags);
 	list_add_tail(&s_job->node, &sched->ring_mirror_list);
 	drm_sched_start_timeout(sched);
@@ -405,7 +392,7 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
 	list_for_each_entry_reverse(s_job, &sched->ring_mirror_list, node) {
 		if (s_job->s_fence->parent &&
 		    dma_fence_remove_callback(s_job->s_fence->parent,
-					      &s_job->s_fence->cb)) {
+					      &s_job->cb)) {
 			dma_fence_put(s_job->s_fence->parent);
 			s_job->s_fence->parent = NULL;
 			atomic_dec(&sched->hw_rq_count);
@@ -426,11 +413,11 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
 		if (s_job->s_fence->parent) {
 			r = dma_fence_add_callback(s_job->s_fence->parent,
-						   &s_job->s_fence->cb,
+						   &s_job->cb,
 						   drm_sched_process_job);
 			if (r == -ENOENT)
 				drm_sched_process_job(s_job->s_fence->parent,
-						      &s_job->s_fence->cb);
+						      &s_job->cb);
 			else if (r)
 				DRM_ERROR("fence add callback failed (%d)\n",
 					  r);
@@ -456,31 +443,34 @@ EXPORT_SYMBOL(drm_sched_stop);
 void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery)
 {
 	struct drm_sched_job *s_job, *tmp;
-	unsigned long flags;
 	int r;
 
 	if (!full_recovery)
 		goto unpark;
 
-	spin_lock_irqsave(&sched->job_list_lock, flags);
+	/*
+	 * Locking the list is not required here as the sched thread is parked
+	 * so no new jobs are being pushed to the HW, and in drm_sched_stop we
+	 * flushed all jobs that were still on the mirror list but had already
+	 * signaled and removed themselves. Also, concurrent GPU recoveries
+	 * can't run in parallel.
+	 */
 	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
-		struct drm_sched_fence *s_fence = s_job->s_fence;
 		struct dma_fence *fence = s_job->s_fence->parent;
 
 		if (fence) {
-			r = dma_fence_add_callback(fence, &s_fence->cb,
+			r = dma_fence_add_callback(fence, &s_job->cb,
						   drm_sched_process_job);
 			if (r == -ENOENT)
-				drm_sched_process_job(fence, &s_fence->cb);
+				drm_sched_process_job(fence, &s_job->cb);
 			else if (r)
 				DRM_ERROR("fence add callback failed (%d)\n",
 					  r);
 		} else
-			drm_sched_process_job(NULL, &s_fence->cb);
+			drm_sched_process_job(NULL, &s_job->cb);
 	}
 
 	drm_sched_start_timeout(sched);
-	spin_unlock_irqrestore(&sched->job_list_lock, flags);
 
 unpark:
 	kthread_unpark(sched->thread);
@@ -629,18 +619,27 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
  */
 static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
 {
-	struct drm_sched_fence *s_fence =
-		container_of(cb, struct drm_sched_fence, cb);
+	struct drm_sched_job *s_job = container_of(cb, struct drm_sched_job, cb);
+	struct drm_sched_fence *s_fence = s_job->s_fence;
 	struct drm_gpu_scheduler *sched = s_fence->sched;
+	unsigned long flags;
+
+	cancel_delayed_work(&sched->work_tdr);
 
-	dma_fence_get(&s_fence->finished);
 	atomic_dec(&sched->hw_rq_count);
 	atomic_dec(&sched->num_jobs);
+
+	spin_lock_irqsave(&sched->job_list_lock, flags);
+	/* remove job from ring_mirror_list */
+	list_del_init(&s_job->node);
+	spin_unlock_irqrestore(&sched->job_list_lock, flags);
+
 	drm_sched_fence_finished(s_fence);
 
 	trace_drm_sched_process_job(s_fence);
-	dma_fence_put(&s_fence->finished);
 	wake_up_interruptible(&sched->wake_up_worker);
+
+	schedule_work(&s_job->finish_work);
 }
 
 /**
@@ -703,16 +702,16 @@ static int drm_sched_main(void *param)
 
 		if (fence) {
 			s_fence->parent = dma_fence_get(fence);
-			r = dma_fence_add_callback(fence, &s_fence->cb,
+			r = dma_fence_add_callback(fence, &sched_job->cb,
 						   drm_sched_process_job);
 			if (r == -ENOENT)
-				drm_sched_process_job(fence, &s_fence->cb);
+				drm_sched_process_job(fence, &sched_job->cb);
 			else if (r)
 				DRM_ERROR("fence add callback failed (%d)\n",
 					  r);
 			dma_fence_put(fence);
 		} else
-			drm_sched_process_job(NULL, &s_fence->cb);
+			drm_sched_process_job(NULL, &sched_job->cb);
 
 		wake_up(&sched->job_scheduled);
 	}
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 4f21faf..62c2352 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -138,10 +138,6 @@ struct drm_sched_fence {
 	struct dma_fence		finished;
 
 	/**
-	 * @cb: the callback for the parent fence below.
-	 */
-	struct dma_fence_cb		cb;
-	/**
 	 * @parent: the fence returned by &drm_sched_backend_ops.run_job
 	 * when scheduling the job on hardware. We signal the
 	 * &drm_sched_fence.finished fence once parent is signalled.
@@ -181,6 +177,7 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f);
  * be scheduled further.
  * @s_priority: the priority of the job.
  * @entity: the entity to which this job belongs.
+ * @cb: the callback for the parent fence in s_fence.
  *
  * A job is created by the driver using drm_sched_job_init(), and
  * should call drm_sched_entity_push_job() once it wants the scheduler
@@ -197,6 +194,7 @@ struct drm_sched_job {
 	atomic_t			karma;
 	enum drm_sched_priority		s_priority;
 	struct drm_sched_entity		*entity;
+	struct dma_fence_cb		cb;
 };
 
 static inline bool drm_sched_invalidate_job(struct drm_sched_job *s_job,
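
---

A note for readers less familiar with the idiom this patch relies on: the
dma_fence_cb is now embedded directly in struct drm_sched_job, so
drm_sched_process_job() recovers the job from the bare callback pointer with
container_of() instead of going through drm_sched_fence. The following is a
minimal user-space sketch of that recovery pattern (the struct and function
names here are hypothetical stand-ins, not the scheduler code itself):

```c
#include <stddef.h>

/* Same macro the kernel uses: map a member pointer back to its container. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct fence_cb_sketch {		/* stands in for struct dma_fence_cb */
	void (*func)(struct fence_cb_sketch *cb);
};

struct sched_job_sketch {		/* stands in for struct drm_sched_job */
	int id;
	struct fence_cb_sketch cb;	/* embedded in the job, as after this patch */
};

/*
 * Analogous to drm_sched_process_job(): the fence core hands back only the
 * cb pointer, and the owning job is recovered from its embedded member.
 */
static struct sched_job_sketch *job_from_cb(struct fence_cb_sketch *cb)
{
	return container_of(cb, struct sched_job_sketch, cb);
}
```

Embedding the cb in the job (rather than in drm_sched_fence) is what lets the
HW fence callback delete the job from ring_mirror_list directly, without a
detour through the scheduler fence.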