From patchwork Mon Dec 10 21:43:58 2018
X-Patchwork-Submitter: Andrey Grodzovsky
X-Patchwork-Id: 10722579
X-Original-To: dri-devel@lists.freedesktop.org
From: Andrey Grodzovsky
Subject: [PATCH v3 2/2] drm/sched: Rework HW fence processing.
Date: Mon, 10 Dec 2018 16:43:58 -0500
Message-ID: <1544478238-13310-2-git-send-email-andrey.grodzovsky@amd.com>
In-Reply-To: <1544478238-13310-1-git-send-email-andrey.grodzovsky@amd.com>
References: <1544478238-13310-1-git-send-email-andrey.grodzovsky@amd.com>
List-Id: Direct Rendering Infrastructure - Development
Cc: Monk.Liu@amd.com
Expedite job deletion from the ring mirror list to the HW fence signal
callback instead of from finish_work. Together with waiting for all such
fences to signal in drm_sched_stop, this guarantees that an already
signaled job will not be processed twice. Remove the sched finish fence
callback and just submit finish_work directly from the HW fence
callback.

v2: Fix comments.
v3: Attach hw fence cb to sched_job

Suggested-by: Christian Koenig
Signed-off-by: Andrey Grodzovsky
---
 drivers/gpu/drm/scheduler/sched_main.c | 58 ++++++++++++++++------------------
 include/drm/gpu_scheduler.h            |  6 ++--
 2 files changed, 30 insertions(+), 34 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index cdf95e2..f0c1f32 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -284,8 +284,6 @@ static void drm_sched_job_finish(struct work_struct *work)
 	cancel_delayed_work_sync(&sched->work_tdr);
 
 	spin_lock_irqsave(&sched->job_list_lock, flags);
-	/* remove job from ring_mirror_list */
-	list_del_init(&s_job->node);
 	/* queue TDR for next job */
 	drm_sched_start_timeout(sched);
 	spin_unlock_irqrestore(&sched->job_list_lock, flags);
@@ -293,22 +291,11 @@ static void drm_sched_job_finish(struct work_struct *work)
 	sched->ops->free_job(s_job);
 }
 
-static void drm_sched_job_finish_cb(struct dma_fence *f,
-				    struct dma_fence_cb *cb)
-{
-	struct drm_sched_job *job = container_of(cb, struct drm_sched_job,
-						 finish_cb);
-	schedule_work(&job->finish_work);
-}
-
 static void drm_sched_job_begin(struct drm_sched_job *s_job)
 {
 	struct drm_gpu_scheduler *sched = s_job->sched;
 	unsigned long flags;
 
-	dma_fence_add_callback(&s_job->s_fence->finished, &s_job->finish_cb,
-			       drm_sched_job_finish_cb);
-
 	spin_lock_irqsave(&sched->job_list_lock, flags);
 	list_add_tail(&s_job->node, &sched->ring_mirror_list);
 	drm_sched_start_timeout(sched);
@@ -359,12 +346,11 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad,
 	list_for_each_entry_reverse(s_job, &sched->ring_mirror_list, node) {
 		if (s_job->s_fence->parent &&
 		    dma_fence_remove_callback(s_job->s_fence->parent,
-					      &s_job->s_fence->cb)) {
+					      &s_job->cb)) {
 			dma_fence_put(s_job->s_fence->parent);
 			s_job->s_fence->parent = NULL;
 			atomic_dec(&sched->hw_rq_count);
-		}
-		else {
+		} else {
 			/* TODO Is it get/put neccessey here ? */
 			dma_fence_get(&s_job->s_fence->finished);
 			list_add(&s_job->finish_node, &wait_list);
@@ -417,31 +403,34 @@ EXPORT_SYMBOL(drm_sched_stop);
 void drm_sched_start(struct drm_gpu_scheduler *sched, bool unpark_only)
 {
 	struct drm_sched_job *s_job, *tmp;
-	unsigned long flags;
 	int r;
 
 	if (unpark_only)
 		goto unpark;
 
-	spin_lock_irqsave(&sched->job_list_lock, flags);
+	/*
+	 * Locking the list is not required here as the sched thread is parked
+	 * so no new jobs are being pushed in to HW and in drm_sched_stop we
+	 * flushed all the jobs who were still in mirror list but who already
+	 * signaled and removed them self from the list. Also concurrent
+	 * GPU recovers can't run in parallel.
+	 */
 	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
-		struct drm_sched_fence *s_fence = s_job->s_fence;
 		struct dma_fence *fence = s_job->s_fence->parent;
 
 		if (fence) {
-			r = dma_fence_add_callback(fence, &s_fence->cb,
+			r = dma_fence_add_callback(fence, &s_job->cb,
						   drm_sched_process_job);
 			if (r == -ENOENT)
-				drm_sched_process_job(fence, &s_fence->cb);
+				drm_sched_process_job(fence, &s_job->cb);
 			else if (r)
 				DRM_ERROR("fence add callback failed (%d)\n",
 					  r);
 		} else
-			drm_sched_process_job(NULL, &s_fence->cb);
+			drm_sched_process_job(NULL, &s_job->cb);
 	}
 
 	drm_sched_start_timeout(sched);
-	spin_unlock_irqrestore(&sched->job_list_lock, flags);
 
 unpark:
 	kthread_unpark(sched->thread);
@@ -590,18 +579,27 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
  */
 static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
 {
-	struct drm_sched_fence *s_fence =
-		container_of(cb, struct drm_sched_fence, cb);
+	struct drm_sched_job *s_job = container_of(cb, struct drm_sched_job, cb);
+	struct drm_sched_fence *s_fence = s_job->s_fence;
 	struct drm_gpu_scheduler *sched = s_fence->sched;
+	unsigned long flags;
+
+	cancel_delayed_work(&sched->work_tdr);
 
-	dma_fence_get(&s_fence->finished);
 	atomic_dec(&sched->hw_rq_count);
 	atomic_dec(&sched->num_jobs);
+
+	spin_lock_irqsave(&sched->job_list_lock, flags);
+	/* remove job from ring_mirror_list */
+	list_del_init(&s_job->node);
+	spin_unlock_irqrestore(&sched->job_list_lock, flags);
+
 	drm_sched_fence_finished(s_fence);
 
 	trace_drm_sched_process_job(s_fence);
-	dma_fence_put(&s_fence->finished);
 	wake_up_interruptible(&sched->wake_up_worker);
+
+	schedule_work(&s_job->finish_work);
 }
 
 /**
@@ -664,16 +662,16 @@ static int drm_sched_main(void *param)
 
 		if (fence) {
 			s_fence->parent = dma_fence_get(fence);
-			r = dma_fence_add_callback(fence, &s_fence->cb,
+			r = dma_fence_add_callback(fence, &sched_job->cb,
						   drm_sched_process_job);
 			if (r == -ENOENT)
-				drm_sched_process_job(fence, &s_fence->cb);
+				drm_sched_process_job(fence, &sched_job->cb);
 			else if (r)
 				DRM_ERROR("fence add callback failed (%d)\n",
 					  r);
 			dma_fence_put(fence);
 		} else
-			drm_sched_process_job(NULL, &s_fence->cb);
+			drm_sched_process_job(NULL, &sched_job->cb);
 
 		wake_up(&sched->job_scheduled);
 	}
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index c94b592..f29aa1c 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -138,10 +138,6 @@ struct drm_sched_fence {
 	struct dma_fence		finished;
 
 	/**
-	 * @cb: the callback for the parent fence below.
-	 */
-	struct dma_fence_cb		cb;
-
 	/**
 	 * @parent: the fence returned by &drm_sched_backend_ops.run_job
 	 * when scheduling the job on hardware. We signal the
 	 * &drm_sched_fence.finished fence once parent is signalled.
@@ -182,6 +178,7 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f);
  * be scheduled further.
  * @s_priority: the priority of the job.
  * @entity: the entity to which this job belongs.
+ * @cb: the callback for the parent fence in s_fence.
  *
  * A job is created by the driver using drm_sched_job_init(), and
  * should call drm_sched_entity_push_job() once it wants the scheduler
@@ -199,6 +196,7 @@ struct drm_sched_job {
 	atomic_t			karma;
 	enum drm_sched_priority		s_priority;
 	struct drm_sched_entity		*entity;
+	struct dma_fence_cb		cb;
 };
 
 static inline bool drm_sched_invalidate_job(struct drm_sched_job *s_job,
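The rework above leans on the dma_fence callback contract: dma_fence_add_callback() returns -ENOENT when the fence has already signaled, and in that case the scheduler must run drm_sched_process_job() inline instead. The following is a minimal userspace sketch of that contract, not the kernel API; all toy_* names and the jobs_in_flight counter are invented for illustration.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* A callback node, analogous to struct dma_fence_cb embedded in the job. */
struct toy_cb;
typedef void (*toy_cb_func)(struct toy_cb *cb);

struct toy_cb {
	toy_cb_func func;
	struct toy_cb *next;
};

/* A fence, analogous to the HW (parent) fence. */
struct toy_fence {
	int signaled;        /* has the fence fired yet? */
	struct toy_cb *cbs;  /* callbacks still pending on this fence */
};

/*
 * Like dma_fence_add_callback(): refuses with -ENOENT when the fence has
 * already signaled, so the caller must invoke the handler inline. This is
 * the r == -ENOENT branch in drm_sched_main() and drm_sched_start().
 */
static int toy_fence_add_callback(struct toy_fence *f, struct toy_cb *cb,
				  toy_cb_func func)
{
	if (f->signaled)
		return -ENOENT;
	cb->func = func;
	cb->next = f->cbs;
	f->cbs = cb;
	return 0;
}

/* Like dma_fence_signal(): mark signaled and fire each pending callback once. */
static void toy_fence_signal(struct toy_fence *f)
{
	struct toy_cb *cb, *next;

	f->signaled = 1;
	for (cb = f->cbs; cb; cb = next) {
		next = cb->next;
		cb->func(cb);
	}
	f->cbs = NULL;
}

/* Stand-in for drm_sched_process_job(): retire one in-flight job. */
static int jobs_in_flight;

static void toy_process_job(struct toy_cb *cb)
{
	(void)cb;
	jobs_in_flight--;
}
```

Because every job either holds exactly one registered callback or has already been retired inline via the -ENOENT path, each job is processed exactly once; that is the invariant drm_sched_stop() relies on when it removes the per-job callbacks before recovery.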