From patchwork Wed Mar 21 17:26:20 2018
X-Patchwork-Submitter: jeff.mcgee@intel.com
X-Patchwork-Id: 10299927
From: jeff.mcgee@intel.com
To: intel-gfx@lists.freedesktop.org
Cc: ben@bwidawsk.net, kalyan.kondapally@intel.com
Date: Wed, 21 Mar 2018 10:26:20 -0700
Message-Id: <20180321172625.6415-4-jeff.mcgee@intel.com>
In-Reply-To: <20180321172625.6415-1-jeff.mcgee@intel.com>
References: <20180321172625.6415-1-jeff.mcgee@intel.com>
Subject: [Intel-gfx] [RFC 3/8] drm/i915: Move engine reset prepare/finish to backends

From: Chris Wilson

In preparation for more careful handling of incomplete preemption during
reset by execlists, we move the existing code wholesale to the backends
under a couple of new reset vfuncs.
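[Editor's note] For readers skimming the interface change rather than the full
diff, the sketch below models the control flow this patch introduces: a backend
publishes three reset vfuncs (prepare/reset/finish) in a nested struct, and the
core reset path calls them in that order around its engine-independent work.
This is a standalone illustration that compiles outside the kernel; the "mock_"
names are hypothetical stand-ins, and only the struct shape and call ordering
mirror the patch, not the i915 implementation.

/*
 * Standalone model of the engine->reset.{prepare,reset,finish} split.
 * Types and bodies are simplified stand-ins; only the call ordering
 * mirrors the real i915 reset path in this patch.
 */
#include <stdio.h>

struct mock_request { int fence_error; };

struct mock_engine {
        const char *name;
        struct {
                /* Quiesce submission and return the request to replay from. */
                struct mock_request *(*prepare)(struct mock_engine *engine);
                /* Point the engine back at the breadcrumb of the hung request. */
                void (*reset)(struct mock_engine *engine, struct mock_request *rq);
                /* Re-enable submission once the engine is running again. */
                void (*finish)(struct mock_engine *engine);
        } reset;
        struct mock_request active;
};

/* Backend stand-ins (execlists_reset_prepare() etc. in the real patch). */
static struct mock_request *mock_reset_prepare(struct mock_engine *engine)
{
        printf("%s: disable submission, find active request\n", engine->name);
        return &engine->active;
}

static void mock_reset(struct mock_engine *engine, struct mock_request *rq)
{
        printf("%s: rewind to hung request (fence_error=%d)\n",
               engine->name, rq->fence_error);
}

static void mock_reset_finish(struct mock_engine *engine)
{
        printf("%s: re-enable submission\n", engine->name);
}

/* Core-side ordering, as in the i915_gem_reset_*_engine() functions below. */
static void mock_engine_reset(struct mock_engine *engine)
{
        struct mock_request *rq;

        rq = engine->reset.prepare(engine); /* i915_gem_reset_prepare_engine() */
        engine->reset.reset(engine, rq);    /* i915_gem_reset_engine() */
        engine->reset.finish(engine);       /* i915_gem_reset_finish_engine() */
}

int main(void)
{
        struct mock_engine rcs = {
                .name = "rcs0",
                .reset = {
                        .prepare = mock_reset_prepare,
                        .reset = mock_reset,
                        .finish = mock_reset_finish,
                },
                .active = { .fence_error = 0 },
        };

        mock_engine_reset(&rcs);
        return 0;
}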
Signed-off-by: Chris Wilson
Cc: Michał Winiarski
Cc: Michel Thierry
Cc: Jeff McGee
---
 drivers/gpu/drm/i915/i915_gem.c         | 42 ++++----------------------
 drivers/gpu/drm/i915/intel_lrc.c        | 52 +++++++++++++++++++++++++++++++--
 drivers/gpu/drm/i915/intel_ringbuffer.c | 20 +++++++++++--
 drivers/gpu/drm/i915/intel_ringbuffer.h |  9 ++++--
 4 files changed, 78 insertions(+), 45 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 802df8e1a544..38f7160d99c9 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2917,7 +2917,7 @@ static bool engine_stalled(struct intel_engine_cs *engine)
 struct i915_request *
 i915_gem_reset_prepare_engine(struct intel_engine_cs *engine)
 {
-	struct i915_request *request = NULL;
+	struct i915_request *request;
 
 	/*
 	 * During the reset sequence, we must prevent the engine from
@@ -2940,40 +2940,7 @@ i915_gem_reset_prepare_engine(struct intel_engine_cs *engine)
 	 */
 	kthread_park(engine->breadcrumbs.signaler);
 
-	/*
-	 * Prevent request submission to the hardware until we have
-	 * completed the reset in i915_gem_reset_finish(). If a request
-	 * is completed by one engine, it may then queue a request
-	 * to a second via its execlists->tasklet *just* as we are
-	 * calling engine->init_hw() and also writing the ELSP.
-	 * Turning off the execlists->tasklet until the reset is over
-	 * prevents the race.
-	 *
-	 * Note that this needs to be a single atomic operation on the
-	 * tasklet (flush existing tasks, prevent new tasks) to prevent
-	 * a race between reset and set-wedged. It is not, so we do the best
-	 * we can atm and make sure we don't lock the machine up in the more
-	 * common case of recursively being called from set-wedged from inside
-	 * i915_reset.
-	 */
-	if (!atomic_read(&engine->execlists.tasklet.count))
-		tasklet_kill(&engine->execlists.tasklet);
-	tasklet_disable(&engine->execlists.tasklet);
-
-	/*
-	 * We're using worker to queue preemption requests from the tasklet in
-	 * GuC submission mode.
-	 * Even though tasklet was disabled, we may still have a worker queued.
-	 * Let's make sure that all workers scheduled before disabling the
-	 * tasklet are completed before continuing with the reset.
-	 */
-	if (engine->i915->guc.preempt_wq)
-		flush_workqueue(engine->i915->guc.preempt_wq);
-
-	if (engine->irq_seqno_barrier)
-		engine->irq_seqno_barrier(engine);
-
-	request = i915_gem_find_active_request(engine);
+	request = engine->reset.prepare(engine);
 	if (request && request->fence.error == -EIO)
 		request = ERR_PTR(-EIO); /* Previous reset failed! */
 
*/ @@ -3120,7 +3087,7 @@ void i915_gem_reset_engine(struct intel_engine_cs *engine, } /* Setup the CS to resume from the breadcrumb of the hung request */ - engine->reset_hw(engine, request); + engine->reset.reset(engine, request); } void i915_gem_reset(struct drm_i915_private *dev_priv) @@ -3172,7 +3139,8 @@ void i915_gem_reset(struct drm_i915_private *dev_priv) void i915_gem_reset_finish_engine(struct intel_engine_cs *engine) { - tasklet_enable(&engine->execlists.tasklet); + engine->reset.finish(engine); + kthread_unpark(engine->breadcrumbs.signaler); intel_uncore_forcewake_put(engine->i915, FORCEWAKE_ALL); diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c index 0bfaeb56b8c7..f662a9524233 100644 --- a/drivers/gpu/drm/i915/intel_lrc.c +++ b/drivers/gpu/drm/i915/intel_lrc.c @@ -1663,6 +1663,44 @@ static int gen9_init_render_ring(struct intel_engine_cs *engine) return init_workarounds_ring(engine); } +static struct i915_request * +execlists_reset_prepare(struct intel_engine_cs *engine) +{ + struct intel_engine_execlists * const execlists = &engine->execlists; + + /* + * Prevent request submission to the hardware until we have + * completed the reset in i915_gem_reset_finish(). If a request + * is completed by one engine, it may then queue a request + * to a second via its execlists->tasklet *just* as we are + * calling engine->init_hw() and also writing the ELSP. + * Turning off the execlists->tasklet until the reset is over + * prevents the race. + * + * Note that this needs to be a single atomic operation on the + * tasklet (flush existing tasks, prevent new tasks) to prevent + * a race between reset and set-wedged. It is not, so we do the best + * we can atm and make sure we don't lock the machine up in the more + * common case of recursively being called from set-wedged from inside + * i915_reset. + */ + if (!atomic_read(&execlists->tasklet.count)) + tasklet_kill(&execlists->tasklet); + tasklet_disable(&execlists->tasklet); + + /* + * We're using worker to queue preemption requests from the tasklet in + * GuC submission mode. + * Even though tasklet was disabled, we may still have a worker queued. + * Let's make sure that all workers scheduled before disabling the + * tasklet are completed before continuing with the reset. + */ + if (engine->i915->guc.preempt_wq) + flush_workqueue(engine->i915->guc.preempt_wq); + + return i915_gem_find_active_request(engine); +} + static void reset_irq(struct intel_engine_cs *engine) { struct drm_i915_private *dev_priv = engine->i915; @@ -1692,8 +1730,8 @@ static void reset_irq(struct intel_engine_cs *engine) clear_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted); } -static void reset_common_ring(struct intel_engine_cs *engine, - struct i915_request *request) +static void execlists_reset(struct intel_engine_cs *engine, + struct i915_request *request) { struct intel_engine_execlists * const execlists = &engine->execlists; struct intel_context *ce; @@ -1766,6 +1804,11 @@ static void reset_common_ring(struct intel_engine_cs *engine, unwind_wa_tail(request); } +static void execlists_reset_finish(struct intel_engine_cs *engine) +{ + tasklet_enable(&engine->execlists.tasklet); +} + static int intel_logical_ring_emit_pdps(struct i915_request *rq) { struct i915_hw_ppgtt *ppgtt = rq->ctx->ppgtt; @@ -2090,7 +2133,10 @@ logical_ring_default_vfuncs(struct intel_engine_cs *engine) { /* Default vfuncs which can be overriden by each engine. 
 	engine->init_hw = gen8_init_common_ring;
-	engine->reset_hw = reset_common_ring;
+
+	engine->reset.prepare = execlists_reset_prepare;
+	engine->reset.reset = execlists_reset;
+	engine->reset.finish = execlists_reset_finish;
 
 	engine->context_pin = execlists_context_pin;
 	engine->context_unpin = execlists_context_unpin;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 04d9d9a946a7..eebcc877ef60 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -530,8 +530,16 @@ static int init_ring_common(struct intel_engine_cs *engine)
 	return ret;
 }
 
-static void reset_ring_common(struct intel_engine_cs *engine,
-			      struct i915_request *request)
+static struct i915_request *reset_prepare(struct intel_engine_cs *engine)
+{
+	if (engine->irq_seqno_barrier)
+		engine->irq_seqno_barrier(engine);
+
+	return i915_gem_find_active_request(engine);
+}
+
+static void reset_ring(struct intel_engine_cs *engine,
+		       struct i915_request *request)
 {
 	/*
 	 * RC6 must be prevented until the reset is complete and the engine
@@ -595,6 +603,10 @@ static void reset_ring_common(struct intel_engine_cs *engine,
 	}
 }
 
+static void reset_finish(struct intel_engine_cs *engine)
+{
+}
+
 static int intel_rcs_ctx_init(struct i915_request *rq)
 {
 	int ret;
@@ -1987,7 +1999,9 @@ static void intel_ring_default_vfuncs(struct drm_i915_private *dev_priv,
 	intel_ring_init_semaphores(dev_priv, engine);
 
 	engine->init_hw = init_ring_common;
-	engine->reset_hw = reset_ring_common;
+	engine->reset.prepare = reset_prepare;
+	engine->reset.reset = reset_ring;
+	engine->reset.finish = reset_finish;
 
 	engine->context_pin = intel_ring_context_pin;
 	engine->context_unpin = intel_ring_context_unpin;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 1f50727a5ddb..e2681303ce21 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -418,8 +418,13 @@ struct intel_engine_cs {
 	void		(*irq_disable)(struct intel_engine_cs *engine);
 
 	int		(*init_hw)(struct intel_engine_cs *engine);
-	void		(*reset_hw)(struct intel_engine_cs *engine,
-				    struct i915_request *rq);
+
+	struct {
+		struct i915_request *(*prepare)(struct intel_engine_cs *engine);
+		void (*reset)(struct intel_engine_cs *engine,
+			      struct i915_request *rq);
+		void (*finish)(struct intel_engine_cs *engine);
+	} reset;
 
 	void		(*park)(struct intel_engine_cs *engine);
 	void		(*unpark)(struct intel_engine_cs *engine);