From patchwork Mon Dec 17 19:51:55 2018
X-Patchwork-Submitter: Andrey Grodzovsky
X-Patchwork-Id: 10734213
X-Original-To: dri-devel@lists.freedesktop.org
From: Andrey Grodzovsky
Subject: [PATCH v4 2/2] drm/sched: Rework HW fence processing.
Date: Mon, 17 Dec 2018 14:51:55 -0500
Message-ID: <1545076315-26861-2-git-send-email-andrey.grodzovsky@amd.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1545076315-26861-1-git-send-email-andrey.grodzovsky@amd.com>
References: <1545076315-26861-1-git-send-email-andrey.grodzovsky@amd.com>
List-Id: Direct Rendering Infrastructure - Development
Cc: Monk.Liu@amd.com
Expedite job deletion from the ring mirror list to the HW fence signal
callback instead of from finish_work. Together with waiting for all such
fences to signal in drm_sched_stop, this guarantees that an already
signaled job will not be processed twice. Remove the sched finish fence
callback and just submit finish_work directly from the HW fence
callback.

v2: Fix comments.
v3: Attach hw fence cb to sched_job

Suggested-by: Christian Koenig
Signed-off-by: Andrey Grodzovsky
---
 drivers/gpu/drm/scheduler/sched_main.c | 55 +++++++++++++++++-----------------
 include/drm/gpu_scheduler.h            |  6 ++--
 2 files changed, 29 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 1cf9541..40df9b9 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -284,8 +284,6 @@ static void drm_sched_job_finish(struct work_struct *work)
 	cancel_delayed_work_sync(&sched->work_tdr);
 
 	spin_lock_irqsave(&sched->job_list_lock, flags);
-	/* remove job from ring_mirror_list */
-	list_del_init(&s_job->node);
 	/* queue TDR for next job */
 	drm_sched_start_timeout(sched);
 	spin_unlock_irqrestore(&sched->job_list_lock, flags);
@@ -293,22 +291,11 @@ static void drm_sched_job_finish(struct work_struct *work)
 	sched->ops->free_job(s_job);
 }
 
-static void drm_sched_job_finish_cb(struct dma_fence *f,
-				    struct dma_fence_cb *cb)
-{
-	struct drm_sched_job *job = container_of(cb, struct drm_sched_job,
-						 finish_cb);
-	schedule_work(&job->finish_work);
-}
-
 static void drm_sched_job_begin(struct drm_sched_job *s_job)
 {
 	struct drm_gpu_scheduler *sched = s_job->sched;
 	unsigned long flags;
 
-	dma_fence_add_callback(&s_job->s_fence->finished, &s_job->finish_cb,
-			       drm_sched_job_finish_cb);
-
 	spin_lock_irqsave(&sched->job_list_lock, flags);
 	list_add_tail(&s_job->node, &sched->ring_mirror_list);
 	drm_sched_start_timeout(sched);
@@ -389,7 +376,7 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
 	list_for_each_entry_reverse(s_job, &sched->ring_mirror_list, node) {
 		if (s_job->s_fence->parent &&
 		    dma_fence_remove_callback(s_job->s_fence->parent,
-					      &s_job->s_fence->cb)) {
+					      &s_job->cb)) {
 			dma_fence_put(s_job->s_fence->parent);
 			s_job->s_fence->parent = NULL;
 			atomic_dec(&sched->hw_rq_count);
@@ -425,31 +412,34 @@ EXPORT_SYMBOL(drm_sched_stop);
 void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery)
 {
 	struct drm_sched_job *s_job, *tmp;
-	unsigned long flags;
 	int r;
 
 	if (!full_recovery)
 		goto unpark;
 
-	spin_lock_irqsave(&sched->job_list_lock, flags);
+	/*
+	 * Locking the list is not required here as the sched thread is parked
+	 * so no new jobs are being pushed in to HW and in drm_sched_stop we
+	 * flushed all the jobs who were still in mirror list but who already
+	 * signaled and removed them self from the list. Also concurrent
+	 * GPU recovers can't run in parallel.
+	 */
 	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
-		struct drm_sched_fence *s_fence = s_job->s_fence;
 		struct dma_fence *fence = s_job->s_fence->parent;
 
 		if (fence) {
-			r = dma_fence_add_callback(fence, &s_fence->cb,
+			r = dma_fence_add_callback(fence, &s_job->cb,
						   drm_sched_process_job);
 			if (r == -ENOENT)
-				drm_sched_process_job(fence, &s_fence->cb);
+				drm_sched_process_job(fence, &s_job->cb);
 			else if (r)
 				DRM_ERROR("fence add callback failed (%d)\n",
 					  r);
 		} else
-			drm_sched_process_job(NULL, &s_fence->cb);
+			drm_sched_process_job(NULL, &s_job->cb);
 	}
 
 	drm_sched_start_timeout(sched);
-	spin_unlock_irqrestore(&sched->job_list_lock, flags);
 
 unpark:
 	kthread_unpark(sched->thread);
@@ -598,18 +588,27 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
  */
 static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
 {
-	struct drm_sched_fence *s_fence =
-		container_of(cb, struct drm_sched_fence, cb);
+	struct drm_sched_job *s_job = container_of(cb, struct drm_sched_job, cb);
+	struct drm_sched_fence *s_fence = s_job->s_fence;
 	struct drm_gpu_scheduler *sched = s_fence->sched;
+	unsigned long flags;
+
+	cancel_delayed_work(&sched->work_tdr);
 
-	dma_fence_get(&s_fence->finished);
 	atomic_dec(&sched->hw_rq_count);
 	atomic_dec(&sched->num_jobs);
+
+	spin_lock_irqsave(&sched->job_list_lock, flags);
+	/* remove job from ring_mirror_list */
+	list_del_init(&s_job->node);
+	spin_unlock_irqrestore(&sched->job_list_lock, flags);
+
 	drm_sched_fence_finished(s_fence);
 
 	trace_drm_sched_process_job(s_fence);
-	dma_fence_put(&s_fence->finished);
 	wake_up_interruptible(&sched->wake_up_worker);
+
+	schedule_work(&s_job->finish_work);
 }
 
 /**
@@ -672,16 +671,16 @@ static int drm_sched_main(void *param)
 
 		if (fence) {
 			s_fence->parent = dma_fence_get(fence);
-			r = dma_fence_add_callback(fence, &s_fence->cb,
+			r = dma_fence_add_callback(fence, &sched_job->cb,
						   drm_sched_process_job);
 			if (r == -ENOENT)
-				drm_sched_process_job(fence, &s_fence->cb);
+				drm_sched_process_job(fence, &sched_job->cb);
 			else if (r)
 				DRM_ERROR("fence add callback failed (%d)\n",
 					  r);
 			dma_fence_put(fence);
 		} else
-			drm_sched_process_job(NULL, &s_fence->cb);
+			drm_sched_process_job(NULL, &sched_job->cb);
 
 		wake_up(&sched->job_scheduled);
 	}
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 441384c..c5bbf4e 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -138,10 +138,6 @@ struct drm_sched_fence {
 	struct dma_fence		finished;
 
 	/**
-	 * @cb: the callback for the parent fence below.
-	 */
-	struct dma_fence_cb		cb;
-
-	/**
 	 * @parent: the fence returned by &drm_sched_backend_ops.run_job
 	 * when scheduling the job on hardware. We signal the
 	 * &drm_sched_fence.finished fence once parent is signalled.
@@ -182,6 +178,7 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f);
  *    be scheduled further.
  * @s_priority: the priority of the job.
  * @entity: the entity to which this job belongs.
+ * @cb: the callback for the parent fence in s_fence.
  *
  * A job is created by the driver using drm_sched_job_init(), and
  * should call drm_sched_entity_push_job() once it wants the scheduler
@@ -199,6 +196,7 @@ struct drm_sched_job {
 	atomic_t			karma;
 	enum drm_sched_priority		s_priority;
 	struct drm_sched_entity		*entity;
+	struct dma_fence_cb		cb;
 };
 
 static inline bool drm_sched_invalidate_job(struct drm_sched_job *s_job,