From patchwork Thu Dec 6 17:41:13 2018
X-Patchwork-Submitter: Andrey Grodzovsky
X-Patchwork-Id: 10716525
From: Andrey Grodzovsky
Subject: [PATCH 1/2] drm/sched: Refactor ring mirror list handling.
Date: Thu, 6 Dec 2018 12:41:13 -0500
Message-ID: <1544118074-24910-1-git-send-email-andrey.grodzovsky@amd.com>
Cc: Monk.Liu@amd.com
Decouple the stopping and starting of the scheduler threads, and the
ring mirror list handling, from the policy of what to do about the
guilty jobs. When stopping the scheduler thread and detaching scheduler
fences from non-signaled HW fences, wait for all signaled HW fences to
complete before re-running the jobs.

Suggested-by: Christian Koenig
Signed-off-by: Andrey Grodzovsky
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 17 +++---
 drivers/gpu/drm/etnaviv/etnaviv_sched.c    |  8 +--
 drivers/gpu/drm/scheduler/sched_main.c     | 86 +++++++++++++++++++-----------
 drivers/gpu/drm/v3d/v3d_sched.c            | 11 ++--
 include/drm/gpu_scheduler.h                | 10 ++--
 5 files changed, 83 insertions(+), 49 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index ef36cc5..42111d5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3292,17 +3292,16 @@ static int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
 	/* block all schedulers and reset given job's ring */
 	for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
 		struct amdgpu_ring *ring = adev->rings[i];
+		bool park_only = job && job->base.sched != &ring->sched;
 
 		if (!ring || !ring->sched.thread)
 			continue;
 
-		kthread_park(ring->sched.thread);
+		drm_sched_stop(&ring->sched, job ? &job->base : NULL, park_only);
 
-		if (job && job->base.sched != &ring->sched)
+		if (park_only)
 			continue;
 
-		drm_sched_hw_job_reset(&ring->sched, job ?
-				       &job->base : NULL);
-
 		/* after all hw jobs are reset, hw fence is meaningless, so force_completion */
 		amdgpu_fence_driver_force_completion(ring);
 	}
@@ -3445,6 +3444,7 @@ static void amdgpu_device_post_asic_reset(struct amdgpu_device *adev,
 					  struct amdgpu_job *job)
 {
 	int i;
+	bool unpark_only;
 
 	for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
 		struct amdgpu_ring *ring = adev->rings[i];
@@ -3456,10 +3456,13 @@ static void amdgpu_device_post_asic_reset(struct amdgpu_device *adev,
 		 * or all rings (in the case @job is NULL)
 		 * after above amdgpu_reset accomplished
 		 */
-		if ((!job || job->base.sched == &ring->sched) && !adev->asic_reset_res)
-			drm_sched_job_recovery(&ring->sched);
+		unpark_only = (job && job->base.sched != &ring->sched) ||
+			      adev->asic_reset_res;
+
+		if (!unpark_only)
+			drm_sched_resubmit_jobs(&ring->sched);
 
-		kthread_unpark(ring->sched.thread);
+		drm_sched_start(&ring->sched, unpark_only);
 	}
 
 	if (!amdgpu_device_has_dc_support(adev)) {
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
index 49a6763..fab3b51 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
@@ -109,16 +109,16 @@ static void etnaviv_sched_timedout_job(struct drm_sched_job *sched_job)
 	}
 
 	/* block scheduler */
-	kthread_park(gpu->sched.thread);
-	drm_sched_hw_job_reset(&gpu->sched, sched_job);
+	drm_sched_stop(&gpu->sched, sched_job, false);
 
 	/* get the GPU back into the init state */
 	etnaviv_core_dump(gpu);
 	etnaviv_gpu_recover_hang(gpu);
 
+	drm_sched_resubmit_jobs(&gpu->sched);
+
 	/* restart scheduler after GPU is usable again */
-	drm_sched_job_recovery(&gpu->sched);
-	kthread_unpark(gpu->sched.thread);
+	drm_sched_start(&gpu->sched, false);
 }
 
 static void etnaviv_sched_free_job(struct drm_sched_job *sched_job)
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index dbb6906..8fb7f86 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -60,8 +60,6 @@
 static void drm_sched_process_job(struct dma_fence *f,
 				  struct dma_fence_cb *cb);
 
-static void drm_sched_expel_job_unlocked(struct drm_sched_job *s_job);
-
 /**
  * drm_sched_rq_init - initialize a given run queue struct
  *
@@ -342,13 +340,21 @@ static void drm_sched_job_timedout(struct work_struct *work)
  * @bad: bad scheduler job
  *
  */
-void drm_sched_hw_job_reset(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
+void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad,
+		    bool park_only)
 {
 	struct drm_sched_job *s_job;
 	struct drm_sched_entity *entity, *tmp;
 	unsigned long flags;
+	struct list_head wait_list;
 	int i;
 
+	kthread_park(sched->thread);
+	if (park_only)
+		return;
+
+	INIT_LIST_HEAD(&wait_list);
+
 	spin_lock_irqsave(&sched->job_list_lock, flags);
 	list_for_each_entry_reverse(s_job, &sched->ring_mirror_list, node) {
 		if (s_job->s_fence->parent &&
@@ -358,9 +364,24 @@ void drm_sched_hw_job_reset(struct drm_gpu_scheduler *sched, struct drm_sched_jo
 			s_job->s_fence->parent = NULL;
 			atomic_dec(&sched->hw_rq_count);
 		}
+		else {
+			/* TODO Is it get/put necessary here ? */
+			dma_fence_get(&s_job->s_fence->finished);
+			list_add(&s_job->finish_node, &wait_list);
+		}
 	}
 	spin_unlock_irqrestore(&sched->job_list_lock, flags);
 
+	/*
+	 * Verify all the signaled jobs in mirror list are removed from the ring.
+	 * We rely on the fact that any finish_work in progress will wait for this
+	 * handler to complete before releasing all of the jobs we iterate.
+	 */
+	list_for_each_entry(s_job, &wait_list, finish_node) {
+		dma_fence_wait(&s_job->s_fence->finished, false);
+		dma_fence_put(&s_job->s_fence->finished);
+	}
+
 	if (bad && bad->s_priority != DRM_SCHED_PRIORITY_KERNEL) {
 		atomic_inc(&bad->karma);
 		/* don't increase @bad's karma if it's from KERNEL RQ,
@@ -385,7 +406,7 @@ void drm_sched_hw_job_reset(struct drm_gpu_scheduler *sched, struct drm_sched_jo
 		}
 	}
 }
-EXPORT_SYMBOL(drm_sched_hw_job_reset);
+EXPORT_SYMBOL(drm_sched_stop);
 
 /**
  * drm_sched_job_recovery - recover jobs after a reset
@@ -393,14 +414,17 @@ EXPORT_SYMBOL(drm_sched_hw_job_reset);
 * @sched: scheduler instance
 *
 */
-void drm_sched_job_recovery(struct drm_gpu_scheduler *sched)
+void drm_sched_start(struct drm_gpu_scheduler *sched, bool unpark_only)
 {
 	struct drm_sched_job *s_job, *tmp;
 	bool found_guilty = false;
 	unsigned long flags;
 	int r;
 
-	spin_lock_irqsave(&sched->job_list_lock, flags);
+	if (unpark_only)
+		goto unpark;
+
+	spin_lock_irqsave(&sched->job_list_lock, flags);
 	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
 		struct drm_sched_fence *s_fence = s_job->s_fence;
 		struct dma_fence *fence;
@@ -414,12 +438,9 @@ void drm_sched_job_recovery(struct drm_gpu_scheduler *sched)
 		if (found_guilty && s_job->s_fence->scheduled.context == guilty_context)
 			dma_fence_set_error(&s_fence->finished, -ECANCELED);
 
-		spin_unlock_irqrestore(&sched->job_list_lock, flags);
-		fence = sched->ops->run_job(s_job);
-		atomic_inc(&sched->hw_rq_count);
+		fence = s_job->s_fence->parent;
 
 		if (fence) {
-			s_fence->parent = dma_fence_get(fence);
 			r = dma_fence_add_callback(fence, &s_fence->cb,
 						   drm_sched_process_job);
 			if (r == -ENOENT)
@@ -427,18 +448,35 @@ void drm_sched_job_recovery(struct drm_gpu_scheduler *sched)
 			else if (r)
 				DRM_ERROR("fence add callback failed (%d)\n",
 					  r);
-			dma_fence_put(fence);
-		} else {
-			if (s_fence->finished.error < 0)
-				drm_sched_expel_job_unlocked(s_job);
+		} else
 			drm_sched_process_job(NULL, &s_fence->cb);
-		}
-
-		spin_lock_irqsave(&sched->job_list_lock, flags);
 	}
+
 	drm_sched_start_timeout(sched);
 	spin_unlock_irqrestore(&sched->job_list_lock, flags);
+
+unpark:
+	kthread_unpark(sched->thread);
 }
-EXPORT_SYMBOL(drm_sched_job_recovery);
+EXPORT_SYMBOL(drm_sched_start);
+
+/**
+ * drm_sched_resubmit_jobs - helper to relaunch jobs from the mirror ring list
+ *
+ * @sched: scheduler instance
+ *
+ */
+void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched)
+{
+	struct drm_sched_job *s_job, *tmp;
+
+	/* TODO Do we need spinlock here ? */
+	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
+		s_job->s_fence->parent = sched->ops->run_job(s_job);
+		atomic_inc(&sched->hw_rq_count);
+	}
+}
+EXPORT_SYMBOL(drm_sched_resubmit_jobs);
 
 /**
  * drm_sched_job_init - init a scheduler job
@@ -634,26 +672,14 @@ static int drm_sched_main(void *param)
 				DRM_ERROR("fence add callback failed (%d)\n",
 					  r);
 			dma_fence_put(fence);
-		} else {
-			if (s_fence->finished.error < 0)
-				drm_sched_expel_job_unlocked(sched_job);
+		} else
 			drm_sched_process_job(NULL, &s_fence->cb);
-		}
 
 		wake_up(&sched->job_scheduled);
 	}
 	return 0;
 }
 
-static void drm_sched_expel_job_unlocked(struct drm_sched_job *s_job)
-{
-	struct drm_gpu_scheduler *sched = s_job->sched;
-
-	spin_lock(&sched->job_list_lock);
-	list_del_init(&s_job->node);
-	spin_unlock(&sched->job_list_lock);
-}
-
 /**
  * drm_sched_init - Init a gpu scheduler instance
 *
diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
index 445b2ef..f99346a 100644
--- a/drivers/gpu/drm/v3d/v3d_sched.c
+++ b/drivers/gpu/drm/v3d/v3d_sched.c
@@ -178,18 +178,19 @@ v3d_job_timedout(struct drm_sched_job *sched_job)
 	for (q = 0; q < V3D_MAX_QUEUES; q++) {
 		struct drm_gpu_scheduler *sched = &v3d->queue[q].sched;
 
-		kthread_park(sched->thread);
-		drm_sched_hw_job_reset(sched, (sched_job->sched == sched ?
-					       sched_job : NULL));
+		drm_sched_stop(sched, (sched_job->sched == sched ?
+				       sched_job : NULL), false);
 	}
 
 	/* get the GPU back into the init state */
 	v3d_reset(v3d);
 
+	for (q = 0; q < V3D_MAX_QUEUES; q++)
+		drm_sched_resubmit_jobs(&v3d->queue[q].sched);
+
 	/* Unblock schedulers and restart their jobs. */
 	for (q = 0; q < V3D_MAX_QUEUES; q++) {
-		drm_sched_job_recovery(&v3d->queue[q].sched);
-		kthread_unpark(v3d->queue[q].sched.thread);
+		drm_sched_start(&v3d->queue[q].sched, false);
 	}
 
 	mutex_unlock(&v3d->reset_lock);
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 47e1979..c94b592 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -175,6 +175,7 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f);
  *               finished to remove the job from the
  *               @drm_gpu_scheduler.ring_mirror_list.
  * @node: used to append this struct to the @drm_gpu_scheduler.ring_mirror_list.
+ * @finish_node: used in a list to wait on before resetting the scheduler
  * @id: a unique id assigned to each job scheduled on the scheduler.
  * @karma: increment on every hang caused by this job. If this exceeds the hang
  *         limit of the scheduler then the job is marked guilty and will not
@@ -193,6 +194,7 @@ struct drm_sched_job {
 	struct dma_fence_cb		finish_cb;
 	struct work_struct		finish_work;
 	struct list_head		node;
+	struct list_head		finish_node;
 	uint64_t			id;
 	atomic_t			karma;
 	enum drm_sched_priority		s_priority;
@@ -298,9 +300,11 @@ int drm_sched_job_init(struct drm_sched_job *job,
 		       void *owner);
 void drm_sched_job_cleanup(struct drm_sched_job *job);
 void drm_sched_wakeup(struct drm_gpu_scheduler *sched);
-void drm_sched_hw_job_reset(struct drm_gpu_scheduler *sched,
-			    struct drm_sched_job *job);
-void drm_sched_job_recovery(struct drm_gpu_scheduler *sched);
+void drm_sched_stop(struct drm_gpu_scheduler *sched,
+		    struct drm_sched_job *job,
+		    bool park_only);
+void drm_sched_start(struct drm_gpu_scheduler *sched, bool unpark_only);
+void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched);
 bool drm_sched_dependency_optimized(struct dma_fence* fence,
 				    struct drm_sched_entity *entity);
 void drm_sched_fault(struct drm_gpu_scheduler *sched);

From patchwork Thu Dec 6 17:41:14 2018
X-Patchwork-Submitter: Andrey Grodzovsky
X-Patchwork-Id: 10716527
autolearn=ham version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id A32582EF77 for ; Thu, 6 Dec 2018 17:41:42 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 2D7366E627; Thu, 6 Dec 2018 17:41:41 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from NAM01-SN1-obe.outbound.protection.outlook.com (mail-eopbgr820055.outbound.protection.outlook.com [40.107.82.55]) by gabe.freedesktop.org (Postfix) with ESMTPS id D7BCB6E628; Thu, 6 Dec 2018 17:41:39 +0000 (UTC) Received: from BN6PR1201CA0020.namprd12.prod.outlook.com (2603:10b6:405:4c::30) by DM3PR12MB0841.namprd12.prod.outlook.com (2a01:111:e400:5985::27) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.1404.19; Thu, 6 Dec 2018 17:41:38 +0000 Received: from BY2NAM03FT042.eop-NAM03.prod.protection.outlook.com (2a01:111:f400:7e4a::203) by BN6PR1201CA0020.outlook.office365.com (2603:10b6:405:4c::30) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.20.1404.17 via Frontend Transport; Thu, 6 Dec 2018 17:41:37 +0000 Received-SPF: None (protection.outlook.com: amd.com does not designate permitted sender hosts) Received: from SATLEXCHOV02.amd.com (165.204.84.17) by BY2NAM03FT042.mail.protection.outlook.com (10.152.85.47) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.20.1404.17 via Frontend Transport; Thu, 6 Dec 2018 17:41:37 +0000 Received: from agrodzovsky-All-Series.amd.com (10.34.1.3) by SATLEXCHOV02.amd.com (10.181.40.72) with Microsoft SMTP Server id 14.3.389.1; Thu, 6 Dec 2018 11:41:35 -0600 From: Andrey Grodzovsky To: , , , , Subject: [PATCH 2/2] 
drm/sched: Rework HW fence processing. Date: Thu, 6 Dec 2018 12:41:14 -0500 Message-ID: <1544118074-24910-2-git-send-email-andrey.grodzovsky@amd.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1544118074-24910-1-git-send-email-andrey.grodzovsky@amd.com> References: <1544118074-24910-1-git-send-email-andrey.grodzovsky@amd.com> MIME-Version: 1.0 X-EOPAttributedMessage: 0 X-MS-Office365-Filtering-HT: Tenant X-Forefront-Antispam-Report: CIP:165.204.84.17; IPV:NLI; CTRY:US; EFV:NLI; SFV:NSPM; SFS:(10009020)(346002)(376002)(396003)(136003)(39860400002)(2980300002)(428003)(189003)(199004)(68736007)(104016004)(97736004)(105586002)(53416004)(478600001)(48376002)(50466002)(72206003)(54906003)(16586007)(8676002)(81156014)(50226002)(81166006)(36756003)(8936002)(2906002)(39060400002)(4326008)(53936002)(446003)(51416003)(26005)(2201001)(186003)(106466001)(2616005)(126002)(476003)(76176011)(11346002)(7696005)(77096007)(86362001)(305945005)(316002)(110136005)(5660300001)(14444005)(336012)(426003)(6666004)(47776003)(486006)(356004)(44832011)(2101003); DIR:OUT; SFP:1101; SCL:1; SRVR:DM3PR12MB0841; H:SATLEXCHOV02.amd.com; FPR:; SPF:None; LANG:en; PTR:InfoDomainNonexistent; MX:1; A:1; X-Microsoft-Exchange-Diagnostics: 1; BY2NAM03FT042; 1:JAmqcbnk2eHy8xUlJOYQZCoE+91rgCViRrXPckrkA9MRDPPNpOfawzJc77npTXYV+rj90nlXVQ/Y1ilpUSee5l813OYKEWk9PvFNicvgxeZG4bxgL1a15IgNmntd1xb/ X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: f554fde3-8b39-4bb4-6553-08d65ba2129a X-Microsoft-Antispam: BCL:0; PCL:0; RULEID:(2390098)(7020095)(4652040)(8989299)(4534185)(4627221)(201703031133081)(201702281549075)(8990200)(5600074)(711020)(2017052603328)(7153060); SRVR:DM3PR12MB0841; X-Microsoft-Exchange-Diagnostics: 1; DM3PR12MB0841; 
3:pOyNju4R321WfgMDX1lBf355coUAAxP7m0i1mBbBQlCqKpWGXTkkxArKWXYZTNMUCiJDA9SUY95eTgp+NI3DrMfnbmxdA76ZV0y8CQnPj3PLPyEfxKGXoynLe/JQTfvHpPpzJs2STGve5rAx8YNlRymS7DV+yQgcFF4CCrH8EkMT3cPHqh2trtIESAym6boEaVC3yKRW68jBKbBnx4p1HQrt5K44HXyf3XnXePjzW/c8bgghMSsqeAuU4xhqRjLj1jrX1gm+ovNEG61WTUTyH8SNzNnK7v2fie0JDd40/OQG0XQ0GgEsVEDQ/JRB+wKWpgYpUxpBy4m06n94zdKl8qbXDCrSyACiOG+TiAWhojI=; 25:5drmgmpBHr+hllrvcJPZJmD+DJsCL0rWkEgyuseITzDj8we+vkp8ftgXWT+jLgZsGIiUgLI155vSioH0GsChkjDf5aaGBKZnnZj4dwF7lXx7JvzgwIfrpDzlDBBkE/G0h4t1VOBSVBZ1ZGoBfORn277255ZLt1Ufz0X37FchuJbNL/psHmCfeUlf4+RmWM+79zJWbxa4mqC4ExXMthDjN6BKvAzspW7afQRLLDIhK7ap2WnRCfmgQXNrx6Ywh8f3zYG6+KZ2s5ZfcCACInRgTY8rIyeNMvwves/pHDWHwWe3jyngNUFUHwsizHJTVB0n8/I2U7O4dT+o0Z8t9/c36A== X-MS-TrafficTypeDiagnostic: DM3PR12MB0841: X-Microsoft-Exchange-Diagnostics: 1; DM3PR12MB0841; 31:q48ZjuCja+Hha5gnWqSezfxpxDBGO62ueFwRLEJIMF5GmkF2/kRx42ajb032uhTlP3FG7QGZsaNNkA2yy/84GLdLyqS/nsTLkW4I07Sx1Gf6nSsrQE4aN3gO5grY6/k/1ouKIpjR+EnK1zg6CG2osRXzOlSyKee++DEfaJp5HLt0gkJb8s8kgN1GW9rYZEtPC7VJXQcLkuFsqJ+pI8LRS9rpsKpDd02GMViziNYJkOY=; 20:mam3rSzMF+dW/LrHjtfak7bIbpQjT1xocLi1Oz9wetcvaIEXkDGxvxm1oFEjcTE0MjTHr+ZSteh+tgr9yY2mnrEFmNL919Rxd8mNNjK2qddxBTr9pHU29dxeMdXKpSzVJTTCY+phq64slmlxRhpVMXtR6jqtbx2bsU2TNzOiXEQ/AZoeTlUWNPh3NCp1G+FoOWrJojaCxe7H1phG+LGXheMuqxidENzACFYEnLdgFqC6W1l04KvzyZMidkMtbMG5AmiEEnKIKBnM0z1dgWIkA9KqNIXp+pPEaevcL1RLLE/DC8SK8v+d+dCqXvMLBQtYAwsPSwsUYCXDJwLXd60JennyrmsjfLg5yToeHYSgqCBugU/MECIk6NpZDAM4vr60loqIJh1RmXxrJG55eIKcq5BwgXHAhSl3z2tjQn0cQbxmCCagYaCOxasmjtSLI0ocgFvNpK1Un/+DejOvxdie5McaKS1/0CQBBkzOSuyV+EyBC6kouVkAuG/737hLes7F X-Microsoft-Antispam-PRVS: X-MS-Exchange-SenderADCheck: 1 X-Exchange-Antispam-Report-CFA-Test: BCL:0; PCL:0; 
RULEID:(8211001083)(6040522)(2401047)(8121501046)(5005006)(3002001)(93006095)(93003095)(3231455)(999002)(944501520)(52105112)(10201501046)(6055026)(148016)(149066)(150057)(6041310)(20161123558120)(20161123564045)(20161123560045)(201703131423095)(201702281528075)(20161123555045)(201703061421075)(201703061406153)(20161123562045)(201708071742011)(7699051)(76991095); SRVR:DM3PR12MB0841; BCL:0; PCL:0; RULEID:; SRVR:DM3PR12MB0841; X-Microsoft-Exchange-Diagnostics: 1; DM3PR12MB0841; 4:myD3E4/E/yVbW8ZTtqJtzjbW5IWLgTKJrO7k9tP6ECjHHljym8PKoZvSVNn7D6uxx+wGjBNHDh/Y2o/OSD9a4IFKIyNj4c8o/k2BsGnv8+fzYfdxyapTz+4R4v3AbCZcqgyzkjQULP7+azGfwhgSn2sd37ZawAHiIg7XfWyOXObAR+Yk6+5epc/+RX+Ff3/Tybw4cZdkxvyqVpRNcJEfOhiX2n/BvYEVsu21XQBOcNuDHAXAf0wBxZeNY6cRvlmgnaZg3JYdKJPxf/Uz8b3AfQ== X-Forefront-PRVS: 087894CD3C X-Microsoft-Exchange-Diagnostics: =?us-ascii?Q?1; DM3PR12MB0841; 23:UqVqlITWoTQFkSA3bsMYaX05HdL0R1a6vTJ+GDr5U?= mDhGVmQ2xNZ0noyNbm7wHm/E+D8HW0bmUg4qqhPQ2sg2gjXZ7uzLfgVGumRnv3KKIYgXO4AR1VV1E+38fHLM0hex3lWZhN1i5U8RE9IHz4qDkdqWSw/nyqs8Ppcm2fT8PHqsiZ1K32+vv/Oh8VCSMt1lU1OY58MgiDVBr3QZRsIQJVuU4LFqMEHHUOCk5+iMoA11E+C/WTCVQtQrnFBKyeixI4dHZTwyAnDoiv87Jp2BshKZucnfjTWe/lkz2xfUbhJRWK532P7Uf5wTmZnrTI4Q14Wf5vK2wjw8iK+2Q3ubgPcOFVha8+ZmA+fKSIxV2k/7Mo76z5+6XmNLtiEefffjGoZ8KG6VGBhvrt03eZNh0gimvQh5/8/YShk4/EN4Oz36NEuhs8inQH5NrnEIuYhAjF5MGYFIGKxtYD7LY+mpwsQqzv/bVkengrVYHzu3LlpILBRUEID7F6LTaXeM9CHOG+bM/tRpGVndbKWjUucDxMgae1J924707EG32pBIxqRkBUYgGceQBP/tjWiiMr35SIdq51cQajxVCTPnEiqarSzqF9Pvzt56ro3eFfBpqkwd3PUohfDO77lJ2B8asz3w3VEzFNXFPNmOMcbdckP9VqbEg2oLd8qt95SFrKfGWwnQas464J6FQdzqPqrRA4HV01XDU8fkQWvgIHKDMttJnO5+nMX4zNLYv7jo0EZuFDGs5ku7xlULDFLJ/bSUFWosVz8eadz2VRdKvira5dYpfmh+GbFSPGohmijUlWlY3jQaIEKaASI+2BvRQ7KnEza8rt9myFW1Pvi/lOiJyQncmYVf7p0lGRrnH2dret/UKuEEC7crRe4GT4cV33KKiSQQainJ5Oxmiww+Z2zotaaffFxTvJB+7u+orhHULizI8CjgTSThFm+475KEFooJkgU8XRRBkEgYyR1PQU4ALEsKdxkRXsIF4gUi/pGN2CBkis40qSDBurx/FgCdd1SADmDLPd8oEQhblPvREeKpWrYJg22r6T8RlQovHPv9q/Ibtik8CKhW0oNmMuAmOJKXYT0daPm3vRR2tHg4W5e3J9V6FnAAESef9QfZWOI985dTZ51d2Rq
y/yYMzvqxXdezfUsGVgHmD6oy0i32taMyBO03aCZfPy/FFAYXH/0Ik6/KKIDgS9pOguUC0kHcDSfEpX0uUeoHpDXmAnVmQt7Z22NFrFWwfMZp1ad+ZvX4P011smXVbsOyk6LiWh1alCa5QyA X-Microsoft-Antispam-Message-Info: Q78jt7/zfxyUy/jx3zddGph7jZzKmZoGKNi7Rn7oPNR+y5WyINNJWQ6pFiWg0q+0AKzyE3mweLxi+AH5anj0CLXRW84l+fi/5Jy5bgpup80KjN4BK0oFkFxTTuRhM6QuutrBaY0KL8RckC06h5bj6PMn1HIWHe0W92iQX2uFMdPoG8tI7/ZSeD660mVmf73OUxoZ/pL99YcxXofkq9pMQi0g0mjhmdu17A2Y3+RNak5gdCg341rg9ygQuH3rz9q0Slh203E5CthgNnswYTAxace23KFbPNHVNinlpFIdJ/1BhZxRDFBvMF9JFlDD+4/2/K2t8gGnbobsjgBphRBI1bIlke2/81khBKAHXdhPw7U= X-Microsoft-Exchange-Diagnostics: 1; DM3PR12MB0841; 6:yDGg0zlD2W9htPVet4SjbLZtrxB1kP7WO+2iLbLHj0bOr0OFw7HMsqOibPX44C038rZQHimTXaXe4I/cQwmw9XRVwNLRa75xT38PYKGjy6rLTJPqm4Edk93OrPpcOBxDi3rob8A1u8TgR8jDGzwQ7xv0ddufdR7OxCH3ByOL2w65Ej0LYnAKoh6smXR01+IuUHAk2qhbXP5TSXMUhrEuRuETMx9M4SysDnXg6IL8TIVkQoCbVoVU0BtNFXJqxWnc5NqaQAOojHBlaVjIn9c7nyziER2cfJa/pq6N4fOpCEJbvUCVPevnctbfr/44vx9B46aSCdUb3Vi4gjoGqQFNB8acliOEZ6ijRc+J+3bxkBNpClw9INX5iqHqVXcGzsBwidm3IvCLBXuC0vVR48xdrYXSNzgr6QmT9fN3m8w9sAxMrIn0VDkhCEg2RvjWqXGKysOF1/hax3kP4LLEXjinLg==; 5:1fW2NnVIOHyZc3NJ7KSlUTnENWESFn3mAx4E3Ue0hy1zH8ee8JtbOyelj0tQ/tXsK5MYGKocCTFmeYy2g/98rF509LCZEindbx1GcZH+GWa2UjQcf0xaw890vylTxJ6rJmIMOjOW3JhH0fyBPr2n8uNfJYh4LADwGfPZd2ZZK8A=; 7:PuAZT3pTD+my0iDLpim46teeJ5Ps6sfIBliOrez8Ue/4vWzS9MoL5gsDIey7FoMRBD85Cdbcrn4CUoPgvxBMDN3hL3QaLQsebbvMPWHrKV2wj1/qkjyG/Qmrmr03/TZzeOuj4bbb7mRCYSAEj+FvIQ== SpamDiagnosticOutput: 1:99 SpamDiagnosticMetadata: NSPM X-Microsoft-Exchange-Diagnostics: 1; DM3PR12MB0841; 20:DQujzLhwi2Ru7xHCCZmxIRLmd+i2iF3MG2HWQ7uHYg4GdqRbH8PpIiXo0ENFmfbDy/cTgd+/+vDDWU/2xYjFEkgwyd6zYWMuEwTUL3lzPWzNloVfeq7VEXpEt3ruq1lxaCIKrhTZqvu1C1hVB+9f3myRK8I8OI6ei2a104fztehLEcNaeCO8sg22chkKR53zRoVXVwURTu85oNsSBMnMI0y48flsXqJwGB8+s2yQRBI5mt55YF5YTXfQxFMWh8qs X-OriginatorOrg: amd.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Dec 2018 17:41:37.2566 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: f554fde3-8b39-4bb4-6553-08d65ba2129a X-MS-Exchange-CrossTenant-Id: 
Cc: Monk.Liu@amd.com

Expedite job deletion from the ring mirror list to the HW fence signal
callback instead of doing it from finish_work. Together with waiting for
all such fences to signal in drm_sched_stop, this guarantees that an
already signaled job will not be processed twice. Remove the scheduler
finish fence callback and just submit finish_work directly from the HW
fence callback.
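The ordering change described above — unlink the job from the mirror list in the fence-signal path, then hand cleanup to finish_work — can be sketched in user space. Everything below (toy_job, toy_sched, toy_process_job, and the list helpers) is a simplified stand-in for the kernel types, not the kernel code itself; locking and the real work queue are omitted.

```c
#include <assert.h>
#include <stdbool.h>

/* User-space stand-ins for the kernel's list primitives; names mirror
 * include/linux/list.h but the types here are simplified assumptions. */
struct list_head {
        struct list_head *prev, *next;
};

static void init_list_head(struct list_head *h)
{
        h->prev = h->next = h;
}

static void list_add_tail(struct list_head *n, struct list_head *h)
{
        n->prev = h->prev;
        n->next = h;
        h->prev->next = n;
        h->prev = n;
}

/* list_del_init() unlinks the node and points it back at itself, so a
 * later list_del_init() on the same node is a harmless no-op. */
static void list_del_init(struct list_head *n)
{
        n->prev->next = n->next;
        n->next->prev = n->prev;
        init_list_head(n);
}

struct toy_job {
        struct list_head node;
        bool finish_queued;
};

struct toy_sched {
        struct list_head ring_mirror_list;
        int hw_rq_count;
};

/* Sketch of the reordered drm_sched_process_job(): on HW fence signal,
 * drop the job from the mirror list first, then queue the finish work
 * (modeled here as a flag instead of schedule_work()). */
static void toy_process_job(struct toy_sched *s, struct toy_job *j)
{
        s->hw_rq_count--;
        list_del_init(&j->node);   /* remove job from ring_mirror_list */
        j->finish_queued = true;   /* schedule_work(&finish_work) analog */
}
```

Because list_del_init() leaves the node self-pointing, the finish work no longer needs to touch the mirror list at all, which is what lets the patch drop the list manipulation from drm_sched_job_finish().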
Suggested-by: Christian Koenig
Signed-off-by: Andrey Grodzovsky
---
 drivers/gpu/drm/scheduler/sched_fence.c |  4 +++-
 drivers/gpu/drm/scheduler/sched_main.c  | 39 ++++++++++++++++-----------------
 include/drm/gpu_scheduler.h             | 10 +++++++--
 3 files changed, 30 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
index d8d2dff..e62c239 100644
--- a/drivers/gpu/drm/scheduler/sched_fence.c
+++ b/drivers/gpu/drm/scheduler/sched_fence.c
@@ -151,7 +151,8 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f)
 EXPORT_SYMBOL(to_drm_sched_fence);
 
 struct drm_sched_fence *drm_sched_fence_create(struct drm_sched_entity *entity,
-					       void *owner)
+					       void *owner,
+					       struct drm_sched_job *s_job)
 {
 	struct drm_sched_fence *fence = NULL;
 	unsigned seq;
@@ -163,6 +164,7 @@ struct drm_sched_fence *drm_sched_fence_create(struct drm_sched_entity *entity,
 	fence->owner = owner;
 	fence->sched = entity->rq->sched;
 	spin_lock_init(&fence->lock);
+	fence->s_job = s_job;
 
 	seq = atomic_inc_return(&entity->fence_seq);
 	dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 8fb7f86..2860037 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -284,31 +284,17 @@ static void drm_sched_job_finish(struct work_struct *work)
 	cancel_delayed_work_sync(&sched->work_tdr);
 
 	spin_lock_irqsave(&sched->job_list_lock, flags);
-	/* remove job from ring_mirror_list */
-	list_del_init(&s_job->node);
-	/* queue TDR for next job */
 	drm_sched_start_timeout(sched);
 	spin_unlock_irqrestore(&sched->job_list_lock, flags);
 
 	sched->ops->free_job(s_job);
 }
 
-static void drm_sched_job_finish_cb(struct dma_fence *f,
-				    struct dma_fence_cb *cb)
-{
-	struct drm_sched_job *job = container_of(cb, struct drm_sched_job,
-						 finish_cb);
-	schedule_work(&job->finish_work);
-}
-
 static void drm_sched_job_begin(struct drm_sched_job *s_job)
 {
 	struct drm_gpu_scheduler *sched = s_job->sched;
 	unsigned long flags;
 
-	dma_fence_add_callback(&s_job->s_fence->finished, &s_job->finish_cb,
-			       drm_sched_job_finish_cb);
-
 	spin_lock_irqsave(&sched->job_list_lock, flags);
 	list_add_tail(&s_job->node, &sched->ring_mirror_list);
 	drm_sched_start_timeout(sched);
@@ -418,13 +404,17 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool unpark_only)
 {
 	struct drm_sched_job *s_job, *tmp;
 	bool found_guilty = false;
-	unsigned long flags;
 	int r;
 
 	if (unpark_only)
 		goto unpark;
 
-	spin_lock_irqsave(&sched->job_list_lock, flags);
+	/*
+	 * Locking the list is not required here as the sched thread is parked
+	 * so no new jobs are being pushed in to HW and in drm_sched_stop we
+	 * flushed any in flight jobs who didn't signal yet. Also concurrent
+	 * GPU recovers can't run in parallel.
+	 */
 	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
 		struct drm_sched_fence *s_fence = s_job->s_fence;
 		struct dma_fence *fence;
@@ -453,7 +443,6 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool unpark_only)
 	}
 
 	drm_sched_start_timeout(sched);
-	spin_unlock_irqrestore(&sched->job_list_lock, flags);
 
 unpark:
 	kthread_unpark(sched->thread);
@@ -505,7 +494,7 @@ int drm_sched_job_init(struct drm_sched_job *job,
 	job->sched = sched;
 	job->entity = entity;
 	job->s_priority = entity->rq - sched->sched_rq;
-	job->s_fence = drm_sched_fence_create(entity, owner);
+	job->s_fence = drm_sched_fence_create(entity, owner, job);
 	if (!job->s_fence)
 		return -ENOMEM;
 	job->id = atomic64_inc_return(&sched->job_id_count);
@@ -593,15 +582,25 @@ static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
 	struct drm_sched_fence *s_fence =
 		container_of(cb, struct drm_sched_fence, cb);
 	struct drm_gpu_scheduler *sched = s_fence->sched;
+	struct drm_sched_job *s_job = s_fence->s_job;
+	unsigned long flags;
+
+	cancel_delayed_work(&sched->work_tdr);
 
-	dma_fence_get(&s_fence->finished);
 	atomic_dec(&sched->hw_rq_count);
 	atomic_dec(&sched->num_jobs);
+
+	spin_lock_irqsave(&sched->job_list_lock, flags);
+	/* remove job from ring_mirror_list */
+	list_del_init(&s_job->node);
+	spin_unlock_irqrestore(&sched->job_list_lock, flags);
+
 	drm_sched_fence_finished(s_fence);
 
 	trace_drm_sched_process_job(s_fence);
-	dma_fence_put(&s_fence->finished);
 	wake_up_interruptible(&sched->wake_up_worker);
+
+	schedule_work(&s_job->finish_work);
 }
 
 /**
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index c94b592..23855c6 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -115,6 +115,8 @@ struct drm_sched_rq {
 	struct drm_sched_entity *current_entity;
 };
 
+struct drm_sched_job;
+
 /**
  * struct drm_sched_fence - fences corresponding to the scheduling of a job.
  */
@@ -160,6 +162,9 @@ struct drm_sched_fence {
 	 * @owner: job owner for debugging
 	 */
 	void *owner;
+
+	/* Back pointer to owning job */
+	struct drm_sched_job *s_job;
 };
 
 struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f);
@@ -330,8 +335,9 @@ void drm_sched_entity_set_priority(struct drm_sched_entity *entity,
 				   enum drm_sched_priority priority);
 bool drm_sched_entity_is_ready(struct drm_sched_entity *entity);
 
-struct drm_sched_fence *drm_sched_fence_create(
-	struct drm_sched_entity *s_entity, void *owner);
+struct drm_sched_fence *drm_sched_fence_create(struct drm_sched_entity *s_entity,
+					       void *owner,
+					       struct drm_sched_job *s_job);
 void drm_sched_fence_scheduled(struct drm_sched_fence *fence);
 void drm_sched_fence_finished(struct drm_sched_fence *fence);
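The gpu_scheduler.h hunks above hinge on a forward declaration: drm_sched_fence now carries a drm_sched_job back pointer, and "struct drm_sched_job;" lets the fence struct name that type before the full job definition appears. A minimal sketch of the same pattern, using hypothetical toy_fence/toy_job names rather than the real kernel types:

```c
#include <assert.h>
#include <stddef.h>

/* Forward declaration, as with "struct drm_sched_job;" in gpu_scheduler.h:
 * toy_fence can hold a pointer to toy_job before toy_job is defined. */
struct toy_job;

struct toy_fence {
        void *owner;
        struct toy_job *s_job;  /* back pointer to owning job */
};

struct toy_job {
        struct toy_fence *s_fence;
        int id;
};

/* Analog of the extended drm_sched_fence_create(): the creator wires the
 * back pointer at construction time instead of patching it up later. */
static void toy_fence_init(struct toy_fence *f, void *owner,
                           struct toy_job *job)
{
        f->owner = owner;
        f->s_job = job;
}
```

Passing the job into the constructor is what lets drm_sched_process_job() recover the job from the signaled fence in one step, without the old intermediate finish-fence callback.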