From patchwork Fri Jan 29 19:19:31 2016
X-Patchwork-Submitter: Dave Gordon
X-Patchwork-Id: 8166681
From: Dave Gordon
To: intel-gfx@lists.freedesktop.org
Date: Fri, 29 Jan 2016 19:19:31 +0000
Message-Id: <1454095171-22475-7-git-send-email-david.s.gordon@intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1454095171-22475-1-git-send-email-david.s.gordon@intel.com>
References: <1454095171-22475-1-git-send-email-david.s.gordon@intel.com>
Organization: Intel Corporation (UK) Ltd. - Co. Reg. #1134945 - Pipers Way, Swindon SN3 1RJ
Cc: Daniel Vetter
Subject: [Intel-gfx] [PATCH v4 6/6] drm/i915: fix context/engine cleanup order

From: Nick Hoath

Swap the order of context & engine cleanup, so that contexts are
cleaned up first, and *then* engines. This is a more sensible order
anyway, but in particular has become necessary since the
'intel_ring_initialized() must be simple and inline' patch, which now
uses ring->dev as an 'initialised' flag, so it can now be NULL after
engine teardown. This in turn can cause a problem in the context code,
which (used to) check the ring->dev->struct_mutex -- causing a fault
if ring->dev was NULL.

Also rename the cleanup function to reflect what it actually does
(cleanup engines, not a ringbuffer), and fix an annoying whitespace
issue.

v2: Also make the fix in i915_load_modeset_init, not just in
    i915_driver_unload (Chris Wilson)
v3: Had extra stuff in it.
v4: Reverted extra stuff (so we're back to v2).
    Rebased and updated commentary above (Dave Gordon).
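For illustration only (not part of the patch, and using made-up
stand-in types rather than the real i915 structures), here is a
minimal standalone C sketch of the failure mode the reordering avoids:
once engine teardown clears ring->dev, any context-teardown path that
still dereferences ring->dev would fault, so contexts must go first.

    /* Standalone userspace sketch -- made-up types, not kernel code. */
    #include <assert.h>
    #include <stdio.h>

    struct fake_drm_device {             /* stand-in for struct drm_device */
            int struct_mutex;            /* stand-in for the real mutex */
    };

    struct fake_engine {                 /* stand-in for intel_engine_cs */
            struct fake_drm_device *dev; /* NULL once the engine is torn down */
    };

    /* Engine teardown: clears ->dev, mirroring its use as an 'initialised' flag. */
    static void cleanup_engines(struct fake_engine *ring)
    {
            ring->dev = NULL;
    }

    /* Context teardown: the old code path effectively touched ring->dev. */
    static void context_fini(struct fake_engine *ring)
    {
            assert(ring->dev != NULL);   /* would trip if engines went first */
            (void)ring->dev->struct_mutex;
    }

    int main(void)
    {
            struct fake_drm_device dev = { 0 };
            struct fake_engine ring = { &dev };

            context_fini(&ring);         /* new, safe order: contexts first... */
            cleanup_engines(&ring);      /* ...then engines */
            printf("teardown order ok\n");
            return 0;
    }

With the pre-patch order (engines first, then contexts), context_fini()
in the sketch would see a NULL ring->dev, which corresponds to the fault
described above.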
Signed-off-by: Nick Hoath
Signed-off-by: David Gordon
Reviewed-by: Chris Wilson
Cc: Mika Kuoppala
Cc: Daniel Vetter
Cc: Chris Wilson
Signed-off-by: Dave Gordon
---
 drivers/gpu/drm/i915/i915_dma.c |  4 ++--
 drivers/gpu/drm/i915/i915_drv.h |  2 +-
 drivers/gpu/drm/i915/i915_gem.c | 23 ++++++++++++-----------
 3 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
index db9b0c6..3eb6844 100644
--- a/drivers/gpu/drm/i915/i915_dma.c
+++ b/drivers/gpu/drm/i915/i915_dma.c
@@ -444,8 +444,8 @@ static int i915_load_modeset_init(struct drm_device *dev)
 
 cleanup_gem:
         mutex_lock(&dev->struct_mutex);
-        i915_gem_cleanup_ringbuffer(dev);
         i915_gem_context_fini(dev);
+        i915_gem_cleanup_engines(dev);
         mutex_unlock(&dev->struct_mutex);
 cleanup_irq:
         intel_guc_ucode_fini(dev);
@@ -1220,8 +1220,8 @@ int i915_driver_unload(struct drm_device *dev)
         intel_guc_ucode_fini(dev);
 
         mutex_lock(&dev->struct_mutex);
-        i915_gem_cleanup_ringbuffer(dev);
         i915_gem_context_fini(dev);
+        i915_gem_cleanup_engines(dev);
         mutex_unlock(&dev->struct_mutex);
 
         intel_fbc_cleanup_cfb(dev_priv);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 905e90f..8baaca7 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -3020,7 +3020,7 @@ int i915_gem_init_rings(struct drm_device *dev);
 int __must_check i915_gem_init_hw(struct drm_device *dev);
 int i915_gem_l3_remap(struct drm_i915_gem_request *req, int slice);
 void i915_gem_init_swizzling(struct drm_device *dev);
-void i915_gem_cleanup_ringbuffer(struct drm_device *dev);
+void i915_gem_cleanup_engines(struct drm_device *dev);
 int __must_check i915_gpu_idle(struct drm_device *dev);
 int __must_check i915_gem_suspend(struct drm_device *dev);
 void __i915_add_request(struct drm_i915_gem_request *req,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 5a4d468..4ee6df9 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4912,7 +4912,7 @@ int i915_gem_init_rings(struct drm_device *dev)
                 req = i915_gem_request_alloc(ring, NULL);
                 if (IS_ERR(req)) {
                         ret = PTR_ERR(req);
-                        i915_gem_cleanup_ringbuffer(dev);
+                        i915_gem_cleanup_engines(dev);
                         goto out;
                 }
 
@@ -4925,7 +4925,7 @@ int i915_gem_init_rings(struct drm_device *dev)
                 if (ret && ret != -EIO) {
                         DRM_ERROR("PPGTT enable ring #%d failed %d\n", i, ret);
                         i915_gem_request_cancel(req);
-                        i915_gem_cleanup_ringbuffer(dev);
+                        i915_gem_cleanup_engines(dev);
                         goto out;
                 }
 
@@ -4933,7 +4933,7 @@ int i915_gem_init_rings(struct drm_device *dev)
                 if (ret && ret != -EIO) {
                         DRM_ERROR("Context enable ring #%d failed %d\n", i, ret);
                         i915_gem_request_cancel(req);
-                        i915_gem_cleanup_ringbuffer(dev);
+                        i915_gem_cleanup_engines(dev);
                         goto out;
                 }
 
@@ -5011,7 +5011,7 @@ int i915_gem_init(struct drm_device *dev)
 }
 
 void
-i915_gem_cleanup_ringbuffer(struct drm_device *dev)
+i915_gem_cleanup_engines(struct drm_device *dev)
 {
         struct drm_i915_private *dev_priv = dev->dev_private;
         struct intel_engine_cs *ring;
@@ -5020,13 +5020,14 @@ int i915_gem_init(struct drm_device *dev)
         for_each_ring(ring, dev_priv, i)
                 dev_priv->gt.cleanup_ring(ring);
 
-        if (i915.enable_execlists)
-                /*
-                 * Neither the BIOS, ourselves or any other kernel
-                 * expects the system to be in execlists mode on startup,
-                 * so we need to reset the GPU back to legacy mode.
-                 */
-                intel_gpu_reset(dev);
+        if (i915.enable_execlists) {
+                /*
+                 * Neither the BIOS, ourselves or any other kernel
+                 * expects the system to be in execlists mode on startup,
+                 * so we need to reset the GPU back to legacy mode.
+                 */
+                intel_gpu_reset(dev);
+        }
 }
 
 static void