From patchwork Tue Jul 26 16:40:51 2016
X-Patchwork-Submitter: arun.siluvery@linux.intel.com
X-Patchwork-Id: 9248443
From: Arun Siluvery
To: intel-gfx@lists.freedesktop.org
Date: Tue, 26 Jul 2016 17:40:51 +0100
Message-Id:
<1469551257-26803-6-git-send-email-arun.siluvery@linux.intel.com>
In-Reply-To: <1469551257-26803-1-git-send-email-arun.siluvery@linux.intel.com>
References: <1469551257-26803-1-git-send-email-arun.siluvery@linux.intel.com>
Subject: [Intel-gfx] [PATCH 05/11] drm/i915/tdr: Identify hung request and drop it

The current active request is the one that caused the hang, so it is retrieved and removed from the ELSP queue; otherwise we cannot submit other workloads to be processed by the GPU. A consistency check between HW and driver state is performed to ensure that we are dropping the correct request. Since this request will no longer be executed, we also advance the seqno to mark it as complete. The head pointer is advanced to skip the offending batch so that the HW resumes execution of other workloads. If HW and SW do not agree, we do not proceed with the engine reset; this is treated as an error condition and we fall back to a full GPU reset.
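The head fixup described above (round the head up to the next QWORD, then clamp it to the tail or wrap past the end of the ring) can be modeled in isolation. A minimal sketch, assuming illustrative names — `fixup_head` is not part of the patch, only a reduction of the arithmetic in `intel_lr_context_resync()`:

```c
#include <stdint.h>

/*
 * Hypothetical reduced model of the head fixup in the patch below:
 * advance head_addr to the next 8-byte (QWORD) boundary, then clamp it
 * to the tail, or wrap to 0 if it ran off the end of the ring buffer.
 */
static uint32_t fixup_head(uint32_t head_addr, uint32_t tail_addr,
			   uint32_t ring_size)
{
	/* round up to the next QWORD, mirroring roundup(head_addr, 8) */
	head_addr = (head_addr + 7u) & ~7u;

	if (head_addr > tail_addr)
		head_addr = tail_addr;
	else if (head_addr >= ring_size)
		head_addr = 0;

	return head_addr;
}
```

An already-aligned head below the tail passes through unchanged; a head that rounds past the tail is clamped to it, so execution resumes exactly at the next queued work.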
Cc: Chris Wilson
Cc: Mika Kuoppala
Signed-off-by: Arun Siluvery
---
 drivers/gpu/drm/i915/intel_lrc.c | 116 +++++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/intel_lrc.h |   2 +
 2 files changed, 118 insertions(+)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index daf1279..8fc5a3b 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1026,6 +1026,122 @@ void intel_lr_context_unpin(struct i915_gem_context *ctx,
 	i915_gem_context_put(ctx);
 }
 
+static void intel_lr_context_resync(struct i915_gem_context *ctx,
+				    struct intel_engine_cs *engine)
+{
+	u32 head;
+	u32 head_addr, tail_addr;
+	u32 *reg_state;
+	struct intel_ringbuffer *ringbuf;
+	struct drm_i915_private *dev_priv = engine->i915;
+
+	ringbuf = ctx->engine[engine->id].ringbuf;
+	reg_state = ctx->engine[engine->id].lrc_reg_state;
+
+	head = I915_READ_HEAD(engine);
+	head_addr = head & HEAD_ADDR;
+	tail_addr = reg_state[CTX_RING_TAIL+1] & TAIL_ADDR;
+
+	/*
+	 * Force head to advance to the next QWORD. In most cases the
+	 * engine head pointer will automatically advance to the next
+	 * instruction as soon as it has read the current instruction,
+	 * without waiting for it to complete. This seems to be the default
+	 * behaviour; however, an MBOX wait inserted directly to the VCS/BCS
+	 * engines does not behave in the same way. Instead the head
+	 * pointer will still be pointing at the MBOX instruction until it
+	 * completes.
+	 */
+	head_addr = roundup(head_addr, 8);
+
+	if (head_addr > tail_addr)
+		head_addr = tail_addr;
+	else if (head_addr >= ringbuf->size)
+		head_addr = 0;
+
+	head &= ~HEAD_ADDR;
+	head |= (head_addr & HEAD_ADDR);
+
+	/* update head in ctx */
+	reg_state[CTX_RING_HEAD+1] = head;
+	I915_WRITE_HEAD(engine, head);
+
+	ringbuf->head = head;
+	ringbuf->last_retired_head = -1;
+	intel_ring_update_space(ringbuf);
+}
+
+/**
+ * intel_execlists_reset_prepare() - identifies the request that is
+ * hung and drops it
+ *
+ * Head is adjusted to skip the batch that caused the hang
+ *
+ * @engine: Engine that is currently hung
+ *
+ * Returns:
+ * 0 - on success
+ * nonzero error code otherwise
+ */
+int intel_execlists_reset_prepare(struct intel_engine_cs *engine)
+{
+	struct drm_i915_gem_request *req;
+	bool continue_with_reset;
+
+	spin_lock_bh(&engine->execlist_lock);
+
+	req = list_first_entry_or_null(&engine->execlist_queue,
+				       struct drm_i915_gem_request,
+				       execlist_link);
+
+	/*
+	 * Only acknowledge the request in the execlist queue if it has
+	 * actually been submitted to hardware; otherwise it cannot have
+	 * caused the hang.
+	 */
+	if (req && req->ctx && req->elsp_submitted) {
+		u32 execlist_status;
+		u32 hw_context;
+		u32 hw_active;
+		struct drm_i915_private *dev_priv = engine->i915;
+
+		hw_context = I915_READ(RING_EXECLIST_STATUS_HI(engine));
+		execlist_status = I915_READ(RING_EXECLIST_STATUS_LO(engine));
+		hw_active = ((execlist_status & EXECLIST_STATUS_ELEMENT0_ACTIVE) ||
+			     (execlist_status & EXECLIST_STATUS_ELEMENT1_ACTIVE));
+
+		continue_with_reset = hw_active && hw_context == req->ctx->hw_id;
+		if (!continue_with_reset) {
+			DRM_ERROR("GPU hung when HW is not active!\n");
+			goto unlock;
+		}
+
+		/*
+		 * GPU is now hung and the request that caused it
+		 * will be dropped, so mark it as completed.
+		 */
+		intel_write_status_page(engine, I915_GEM_HWS_INDEX, req->fence.seqno);
+
+		intel_lr_context_resync(req->ctx, engine);
+
+		/*
+		 * Remove the request from the elsp queue so that the
+		 * engine can resume execution after reset when new
+		 * requests are submitted.
+		 */
+		if (!--req->elsp_submitted) {
+			list_del(&req->execlist_link);
+			i915_gem_request_put(req);
+		}
+	} else {
+		WARN(1, "GPU hang detected with no active request\n");
+		continue_with_reset = false;
+	}
+
+unlock:
+	spin_unlock_bh(&engine->execlist_lock);
+	return !continue_with_reset;
+}
+
 static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
 {
 	int ret, i;
diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index 3828730..1171ea1 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -31,6 +31,8 @@
 /* Execlists regs */
 #define RING_ELSP(engine)			_MMIO((engine)->mmio_base + 0x230)
 #define RING_EXECLIST_STATUS_LO(engine)		_MMIO((engine)->mmio_base + 0x234)
+#define EXECLIST_STATUS_ELEMENT0_ACTIVE		(1 << 14)
+#define EXECLIST_STATUS_ELEMENT1_ACTIVE		(1 << 15)
 #define RING_EXECLIST_STATUS_HI(engine)		_MMIO((engine)->mmio_base + 0x234 + 4)
 #define RING_CONTEXT_CONTROL(engine)		_MMIO((engine)->mmio_base + 0x244)
 #define CTX_CTRL_INHIBIT_SYN_CTX_SWITCH		(1 << 3)
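The HW/SW consistency check at the core of intel_execlists_reset_prepare() — reset only when an ELSP element is active and the context ID the HW reports matches the request at the head of the execlist queue — can be sketched standalone. A minimal model, assuming the illustrative name `may_reset_engine` (not part of the patch); only the two status bits are taken from the patch's intel_lrc.h additions:

```c
#include <stdbool.h>
#include <stdint.h>

/* Bit positions added to intel_lrc.h by this patch */
#define EXECLIST_STATUS_ELEMENT0_ACTIVE	(1u << 14)
#define EXECLIST_STATUS_ELEMENT1_ACTIVE	(1u << 15)

/*
 * Hypothetical reduction of the check in intel_execlists_reset_prepare():
 * proceed with the engine reset only if either ELSP element is active
 * and the context ID read from EXECLIST_STATUS_HI matches the hw_id of
 * the request at the head of the driver's execlist queue.
 */
static bool may_reset_engine(uint32_t execlist_status, uint32_t hw_context,
			     uint32_t req_ctx_hw_id)
{
	bool hw_active = (execlist_status & EXECLIST_STATUS_ELEMENT0_ACTIVE) ||
			 (execlist_status & EXECLIST_STATUS_ELEMENT1_ACTIVE);

	return hw_active && hw_context == req_ctx_hw_id;
}
```

If either condition fails, the patch treats the situation as an error and falls back to a full GPU reset rather than dropping a request the hardware may not actually be executing.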