From patchwork Mon Apr 15 19:43:21 2019
From: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
To: dri-devel@lists.freedesktop.org
Subject: [PATCH v3 3/5] drm/amdgpu: Avoid HW reset if guilty job already signaled.
Date: Mon, 15 Apr 2019 15:43:21 -0400
Message-ID: <1555357403-30813-3-git-send-email-andrey.grodzovsky@amd.com>
In-Reply-To: <1555357403-30813-1-git-send-email-andrey.grodzovsky@amd.com>
References: <1555357403-30813-1-git-send-email-andrey.grodzovsky@amd.com>
Cc: Nicholas.Kazlauskas@amd.com
Also reject TDRs if another one is already running.

v2:
Stop all schedulers across the device and the entire XGMI hive before
force signaling HW fences.
Avoid passing job_signaled to helper functions to keep all the decision
making about skipping HW reset in one place.

v3:
Fix SW scheduler hang after non-HW reset: sched.hw_rq_count has to be
balanced against its decrement in drm_sched_stop in the non-HW reset case.

Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 167 +++++++++++++++++++----------
 1 file changed, 110 insertions(+), 57 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 8c41a9f..d88a882 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3341,25 +3341,16 @@ static int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
 		if (!ring || !ring->sched.thread)
 			continue;
 
-		drm_sched_stop(&ring->sched, &job->base);
-
 		/* after all hw jobs are reset, hw fence is meaningless, so force_completion */
 		amdgpu_fence_driver_force_completion(ring);
 	}
 
-	if(job) {
+	if(job)
 		drm_sched_increase_karma(&job->base);
 
-		/*
-		 * Guilty job did complete and hence needs to be manually removed
-		 * See drm_sched_stop doc.
-		 */
-		if (list_empty(&job->base.node))
-			job->base.sched->ops->free_job(&job->base);
-	}
-
+	/* Don't suspend on bare metal if we are not going to HW reset the ASIC */
 	if (!amdgpu_sriov_vf(adev)) {
 		if (!need_full_reset)
@@ -3497,37 +3488,21 @@ static int amdgpu_do_asic_reset(struct amdgpu_hive_info *hive,
 	return r;
 }
 
-static void amdgpu_device_post_asic_reset(struct amdgpu_device *adev)
+static bool amdgpu_device_lock_adev(struct amdgpu_device *adev, bool trylock)
 {
-	int i;
+	if (trylock) {
+		if (!mutex_trylock(&adev->lock_reset))
+			return false;
+	} else
+		mutex_lock(&adev->lock_reset);
 
-	for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
-		struct amdgpu_ring *ring = adev->rings[i];
-
-		if (!ring || !ring->sched.thread)
-			continue;
-
-		if (!adev->asic_reset_res)
-			drm_sched_resubmit_jobs(&ring->sched);
-
-		drm_sched_start(&ring->sched, !adev->asic_reset_res);
-	}
-
-	if (!amdgpu_device_has_dc_support(adev)) {
-		drm_helper_resume_force_mode(adev->ddev);
-	}
-
-	adev->asic_reset_res = 0;
-}
-
-static void amdgpu_device_lock_adev(struct amdgpu_device *adev)
-{
-	mutex_lock(&adev->lock_reset);
 	atomic_inc(&adev->gpu_reset_counter);
 	adev->in_gpu_reset = 1;
 	/* Block kfd: SRIOV would do it separately */
 	if (!amdgpu_sriov_vf(adev))
 		amdgpu_amdkfd_pre_reset(adev);
+
+	return true;
 }
 
 static void amdgpu_device_unlock_adev(struct amdgpu_device *adev)
@@ -3555,40 +3530,42 @@ static void amdgpu_device_unlock_adev(struct amdgpu_device *adev)
 int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
 			      struct amdgpu_job *job)
 {
-	int r;
+	struct list_head device_list, *device_list_handle =  NULL;
+	bool need_full_reset, job_signaled;
 	struct amdgpu_hive_info *hive = NULL;
-	bool need_full_reset = false;
 	struct amdgpu_device *tmp_adev = NULL;
-	struct list_head device_list, *device_list_handle =  NULL;
+	int i, r = 0;
 
+	need_full_reset = job_signaled = false;
 	INIT_LIST_HEAD(&device_list);
 
 	dev_info(adev->dev, "GPU reset begin!\n");
 
+	hive = amdgpu_get_xgmi_hive(adev, false);
+
 	/*
-	 * In case of XGMI hive disallow concurrent resets to be triggered
-	 * by different nodes. No point also since the one node already executing
-	 * reset will also reset all the other nodes in the hive.
+	 * Here we trylock to avoid a chain of resets, either triggered by
+	 * jobs on different adevs in the XGMI hive or by jobs on different
+	 * schedulers for the same device, while this TO handler is running.
+	 * We always reset all schedulers for a device and all devices for the
+	 * XGMI hive, so that should take care of them too.
 	 */
-	hive = amdgpu_get_xgmi_hive(adev, 0);
-	if (hive && adev->gmc.xgmi.num_physical_nodes > 1 &&
-	    !mutex_trylock(&hive->reset_lock))
+
+	if (hive && !mutex_trylock(&hive->reset_lock)) {
+		DRM_INFO("Bailing on TDR for s_job:%llx, hive: %llx as another already in progress",
+			 job->base.id, hive->hive_id);
 		return 0;
+	}
 
 	/* Start with adev pre asic reset first for soft reset check.*/
-	amdgpu_device_lock_adev(adev);
-	r = amdgpu_device_pre_asic_reset(adev,
-					 job,
-					 &need_full_reset);
-	if (r) {
-		/*TODO Should we stop ?*/
-		DRM_ERROR("GPU pre asic reset failed with err, %d for drm dev, %s ",
-			  r, adev->ddev->unique);
-		adev->asic_reset_res = r;
+	if (!amdgpu_device_lock_adev(adev, !hive)) {
+		DRM_INFO("Bailing on TDR for s_job:%llx, as another already in progress",
+			 job->base.id);
+		return 0;
 	}
 
 	/* Build list of devices to reset */
-	if  (need_full_reset && adev->gmc.xgmi.num_physical_nodes > 1) {
+	if  (adev->gmc.xgmi.num_physical_nodes > 1) {
 		if (!hive) {
 			amdgpu_device_unlock_adev(adev);
 			return -ENODEV;
@@ -3605,13 +3582,56 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
 		device_list_handle = &device_list;
 	}
 
+	/* block all schedulers and reset given job's ring */
+	list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) {
+		for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
+			struct amdgpu_ring *ring = tmp_adev->rings[i];
+
+			if (!ring || !ring->sched.thread)
+				continue;
+
+			drm_sched_stop(&ring->sched, &job->base);
+		}
+	}
+
+
+	/*
+	 * Must check guilty signal here since after this point all old
+	 * HW fences are force signaled.
+	 *
+	 * job->base holds a reference to parent fence
+	 */
+	if (job && job->base.s_fence->parent &&
+	    dma_fence_is_signaled(job->base.s_fence->parent))
+		job_signaled = true;
+
+	if (!amdgpu_device_ip_need_full_reset(adev))
+		device_list_handle = &device_list;
+
+	if (job_signaled) {
+		dev_info(adev->dev, "Guilty job already signaled, skipping HW reset");
+		goto skip_hw_reset;
+	}
+
+
+	/* Guilty job will be freed after this*/
+	r = amdgpu_device_pre_asic_reset(adev,
+					 job,
+					 &need_full_reset);
+	if (r) {
+		/*TODO Should we stop ?*/
+		DRM_ERROR("GPU pre asic reset failed with err, %d for drm dev, %s ",
+			  r, adev->ddev->unique);
+		adev->asic_reset_res = r;
+	}
+
 retry:	/* Rest of adevs pre asic reset from XGMI hive. */
 	list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) {
 
 		if (tmp_adev == adev)
 			continue;
 
-		amdgpu_device_lock_adev(tmp_adev);
+		amdgpu_device_lock_adev(tmp_adev, false);
 		r = amdgpu_device_pre_asic_reset(tmp_adev,
 						 NULL,
 						 &need_full_reset);
@@ -3635,9 +3655,42 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
 		goto retry;
 	}
 
+skip_hw_reset:
+
+	/*
+	 * Guilty job did complete and hence needs to be manually removed
+	 * See drm_sched_stop doc.
+	 */
+	if (job && list_empty(&job->base.node))
+		job->base.sched->ops->free_job(&job->base);
+
 	/* Post ASIC reset for all devs .*/
 	list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) {
-		amdgpu_device_post_asic_reset(tmp_adev);
+		for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
+			struct drm_sched_job *s_job;
+			struct amdgpu_ring *ring = tmp_adev->rings[i];
+
+			if (!ring || !ring->sched.thread)
+				continue;
+
+			/* No point to resubmit jobs if we didn't HW reset*/
+			if (!tmp_adev->asic_reset_res && !job_signaled)
+				drm_sched_resubmit_jobs(&ring->sched);
+
+			/* hw_rq_count was decremented in drm_sched_stop */
+			if (job_signaled) {
+				list_for_each_entry(s_job, &ring->sched.ring_mirror_list, node) {
+					atomic_inc(&ring->sched.hw_rq_count);
+				}
+			}
+
+			drm_sched_start(&ring->sched, !tmp_adev->asic_reset_res);
+		}
+
+		if (!amdgpu_device_has_dc_support(tmp_adev) && !job_signaled) {
+			drm_helper_resume_force_mode(tmp_adev->ddev);
+		}
+
+		tmp_adev->asic_reset_res = 0;
 
 		if (r) {
 			/* bad news, how to tell it to userspace ? */
@@ -3650,7 +3703,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
 		amdgpu_device_unlock_adev(tmp_adev);
 	}
 
-	if (hive && adev->gmc.xgmi.num_physical_nodes > 1)
+	if (hive)
 		mutex_unlock(&hive->reset_lock);
 
 	if (r)
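
For readers following the locking rework: below is a minimal standalone
sketch (not part of the patch; pthreads stand in for kernel mutexes, and
the fake_* names are invented for illustration) of the trylock pattern
that amdgpu_device_lock_adev() gains here. A TDR that cannot take the
per-device reset lock, or the hive-wide reset_lock, simply bails out,
since the recovery already in flight resets every scheduler on the
device (and every device in the XGMI hive) anyway.

/* Illustrative only -- userspace model of the in-kernel trylock pattern. */
#include <stdbool.h>
#include <stdio.h>
#include <pthread.h>

struct fake_adev {
	pthread_mutex_t lock_reset;		/* models adev->lock_reset */
};

/* Mirrors the shape of the reworked amdgpu_device_lock_adev(adev, trylock). */
static bool fake_lock_adev(struct fake_adev *adev, bool trylock)
{
	if (trylock) {
		/* Another recovery in flight: report failure, caller bails. */
		if (pthread_mutex_trylock(&adev->lock_reset) != 0)
			return false;
	} else {
		pthread_mutex_lock(&adev->lock_reset);
	}
	return true;
}

int main(void)
{
	struct fake_adev adev = { .lock_reset = PTHREAD_MUTEX_INITIALIZER };

	fake_lock_adev(&adev, false);		/* first TDR takes the lock */
	if (!fake_lock_adev(&adev, true))	/* concurrent TDR bails out */
		printf("Bailing on TDR, another already in progress\n");
	pthread_mutex_unlock(&adev.lock_reset);
	return 0;
}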
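The v3 note about balancing sched.hw_rq_count is the subtle part:
drm_sched_stop() decrements the counter once per in-flight job it parks,
and drm_sched_resubmit_jobs() normally re-increments it. When the HW
reset is skipped there is no resubmission, so without the manual
atomic_inc() loop the scheduler would permanently under-count its HW
queue and eventually stop feeding it (the "SW scheduler hang" from the
changelog). A toy model of that bookkeeping, with invented fake_sched
fields, for illustration only:

/* Toy userspace model of the hw_rq_count bookkeeping; not kernel code. */
#include <stdio.h>

struct fake_sched {
	int hw_rq_count;	/* models atomic_t hw_rq_count */
	int parked_jobs;	/* jobs left on ring_mirror_list */
};

/* drm_sched_stop() analogue: parks jobs, dropping one count per job. */
static void fake_sched_stop(struct fake_sched *s)
{
	s->hw_rq_count -= s->parked_jobs;
}

/* The v3 fix: when the HW reset (and thus resubmission) is skipped,
 * re-increment the count for every job still on the mirror list so the
 * scheduler resumes with an accurate in-flight count. */
static void fake_rebalance(struct fake_sched *s)
{
	for (int i = 0; i < s->parked_jobs; i++)
		s->hw_rq_count++;	/* atomic_inc(&sched->hw_rq_count) */
}

int main(void)
{
	struct fake_sched s = { .hw_rq_count = 2, .parked_jobs = 2 };

	fake_sched_stop(&s);	/* count drops to 0 */
	fake_rebalance(&s);	/* skipped-HW-reset path restores it */
	printf("hw_rq_count = %d\n", s.hw_rq_count);	/* prints 2 */
	return 0;
}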