From patchwork Fri Jun 17 07:09:11 2016
X-Patchwork-Submitter: arun.siluvery@linux.intel.com
X-Patchwork-Id: 9182711
From: Arun Siluvery <arun.siluvery@linux.intel.com>
To: intel-gfx@lists.freedesktop.org
Date: Fri, 17 Jun 2016 08:09:11 +0100
Message-Id: <1466147355-4635-12-git-send-email-arun.siluvery@linux.intel.com>
In-Reply-To: <1466147355-4635-1-git-send-email-arun.siluvery@linux.intel.com>
References: <1466147355-4635-1-git-send-email-arun.siluvery@linux.intel.com>
Cc: Tomas Elf
Subject: [Intel-gfx] [PATCH v2 11/15] drm/i915: Port of Added scheduler support to __wait_request() calls

This is a partial port of the following patch from John Harrison's GPU
scheduler patch series (sent to Intel-GFX with the subject line
"[Intel-gfx] [RFC 19/39] drm/i915: Added scheduler support to
__wait_request() calls" on Fri 17 July 2015):

  Author: John Harrison
  Date: Thu Apr 10 10:48:55 2014 +0100
  Subject: drm/i915: Added scheduler support to __wait_request() calls

All scheduler references have been removed and the patch backported to
this baseline. We need this because, as Chris Wilson has pointed out,
threads that do not hold struct_mutex should not be thrown out of
__i915_wait_request during TDR hang recovery. __i915_wait_request
therefore needs a way to tell which of its callers hold the mutex and
which do not.
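To make the new calling convention concrete, here is a standalone sketch of
how callers build the flags word (illustrative only: the flag values mirror
the patch, but `build_wait_flags` and its simplified parameters are
hypothetical helpers, not kernel code):

```c
#include <assert.h>

/* Illustrative copies of the two flag bits this patch adds to i915_drv.h;
 * the values match the patch, but nothing else here is kernel code. */
#define I915_WAIT_REQUEST_INTERRUPTIBLE (1 << 0)
#define I915_WAIT_REQUEST_LOCKED        (1 << 1)

/* How callers are converted: the old 'bool interruptible' argument becomes
 * the INTERRUPTIBLE bit, and callers that hold struct_mutex additionally
 * set the LOCKED bit so the wait path can identify them. */
static unsigned int build_wait_flags(int interruptible, int holds_struct_mutex)
{
	unsigned int flags = interruptible ? I915_WAIT_REQUEST_INTERRUPTIBLE : 0;

	if (holds_struct_mutex)
		flags |= I915_WAIT_REQUEST_LOCKED;
	return flags;
}
```

In the diff below, i915_wait_request() and __i915_gem_object_sync() follow
the "interruptible + LOCKED" pattern, while the nonblocking and ioctl waits
pass only the INTERRUPTIBLE bit.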
Cc: Chris Wilson
Cc: Mika Kuoppala
Signed-off-by: Tomas Elf
Signed-off-by: John Harrison
Signed-off-by: Arun Siluvery
---
 drivers/gpu/drm/i915/i915_drv.h         |  7 ++++++-
 drivers/gpu/drm/i915/i915_gem.c         | 34 ++++++++++++++++++++++-----------
 drivers/gpu/drm/i915/intel_display.c    |  5 +++--
 drivers/gpu/drm/i915/intel_ringbuffer.c |  8 +++++---
 4 files changed, 37 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index b2105bb..3e02b41 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -3350,8 +3350,13 @@ void __i915_add_request(struct drm_i915_gem_request *req,
 	__i915_add_request(req, NULL, true)
 #define i915_add_request_no_flush(req)	\
 	__i915_add_request(req, NULL, false)
+
+/* flags used by users of __i915_wait_request */
+#define I915_WAIT_REQUEST_INTERRUPTIBLE (1 << 0)
+#define I915_WAIT_REQUEST_LOCKED (1 << 1)
+
 int __i915_wait_request(struct drm_i915_gem_request *req,
-			bool interruptible,
+			u32 flags,
 			s64 *timeout,
 			struct intel_rps_client *rps);
 int __must_check i915_wait_request(struct drm_i915_gem_request *req);
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index bc404da..b0c2263 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1455,7 +1455,9 @@ static int __i915_spin_request(struct drm_i915_gem_request *req, int state)
 /**
  * __i915_wait_request - wait until execution of request has finished
  * @req: duh!
- * @interruptible: do an interruptible wait (normally yes)
+ * @flags: flags to define the nature of the wait
+ *	I915_WAIT_REQUEST_INTERRUPTIBLE - do an interruptible wait (normally yes)
+ *	I915_WAIT_REQUEST_LOCKED - caller is holding struct_mutex
  * @timeout: in - how long to wait (NULL forever); out - how much time remaining
  * @rps: RPS client
  *
@@ -1470,7 +1472,7 @@ static int __i915_spin_request(struct drm_i915_gem_request *req, int state)
  * errno with remaining time filled in timeout argument.
  */
 int __i915_wait_request(struct drm_i915_gem_request *req,
-			bool interruptible,
+			u32 flags,
 			s64 *timeout,
 			struct intel_rps_client *rps)
 {
@@ -1478,6 +1480,7 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
 	struct drm_i915_private *dev_priv = req->i915;
 	const bool irq_test_in_progress =
 		ACCESS_ONCE(dev_priv->gpu_error.test_irq_rings) & intel_engine_flag(engine);
+	bool interruptible = flags & I915_WAIT_REQUEST_INTERRUPTIBLE;
 	int state = interruptible ? TASK_INTERRUPTIBLE : TASK_UNINTERRUPTIBLE;
 	DEFINE_WAIT(wait);
 	unsigned long timeout_expire;
@@ -1526,6 +1529,7 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
 	for (;;) {
 		struct timer_list timer;
 		int reset_pending;
+		bool locked = flags & I915_WAIT_REQUEST_LOCKED;
 
 		prepare_to_wait(&engine->irq_queue, &wait, state);
 
@@ -1543,7 +1547,7 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
 		reset_pending = i915_engine_reset_pending(&dev_priv->gpu_error,
 							  NULL);
-		if (reset_pending) {
+		if (reset_pending || locked) {
 			ret = -EAGAIN;
 			break;
 		}
@@ -1705,14 +1709,15 @@ int
 i915_wait_request(struct drm_i915_gem_request *req)
 {
 	struct drm_i915_private *dev_priv = req->i915;
-	bool interruptible;
+	u32 flags;
 	int ret;
 
-	interruptible = dev_priv->mm.interruptible;
-	BUG_ON(!mutex_is_locked(&dev_priv->dev->struct_mutex));
-	ret = __i915_wait_request(req, interruptible, NULL, NULL);
+	flags = dev_priv->mm.interruptible ?
+		I915_WAIT_REQUEST_INTERRUPTIBLE : 0;
+	flags |= I915_WAIT_REQUEST_LOCKED;
+
+	ret = __i915_wait_request(req, flags, NULL, NULL);
 	if (ret)
 		return ret;
@@ -1824,7 +1829,9 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
 	mutex_unlock(&dev->struct_mutex);
 	ret = 0;
 	for (i = 0; ret == 0 && i < n; i++)
-		ret = __i915_wait_request(requests[i], true, NULL, rps);
+		ret = __i915_wait_request(requests[i],
+					  I915_WAIT_REQUEST_INTERRUPTIBLE,
+					  NULL, rps);
 	mutex_lock(&dev->struct_mutex);
 
 	for (i = 0; i < n; i++) {
@@ -3442,7 +3449,7 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 
 	for (i = 0; i < n; i++) {
 		if (ret == 0)
-			ret = __i915_wait_request(req[i], true,
+			ret = __i915_wait_request(req[i], I915_WAIT_REQUEST_INTERRUPTIBLE,
 						  args->timeout_ns > 0 ? &args->timeout_ns : NULL,
 						  to_rps_client(file));
 		i915_gem_request_unreference(req[i]);
@@ -3473,8 +3480,13 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
 
 	if (!i915_semaphore_is_enabled(to_i915(obj->base.dev))) {
 		struct drm_i915_private *i915 = to_i915(obj->base.dev);
+		u32 flags;
+
+		flags = i915->mm.interruptible ?
+			I915_WAIT_REQUEST_INTERRUPTIBLE : 0;
+		flags |= I915_WAIT_REQUEST_LOCKED;
+
 		ret = __i915_wait_request(from_req,
-					  i915->mm.interruptible,
+					  flags,
 					  NULL,
 					  &i915->rps.semaphores);
 		if (ret)
@@ -4476,7 +4488,7 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
 	if (target == NULL)
 		return 0;
 
-	ret = __i915_wait_request(target, true, NULL, NULL);
+	ret = __i915_wait_request(target, I915_WAIT_REQUEST_INTERRUPTIBLE, NULL, NULL);
 	if (ret == 0)
 		queue_delayed_work(dev_priv->wq, &dev_priv->mm.retire_work, 0);
 
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 095f83e..fa29091 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11546,7 +11546,7 @@ static void intel_mmio_flip_work_func(struct work_struct *w)
 
 	if (work->flip_queued_req)
 		WARN_ON(__i915_wait_request(work->flip_queued_req,
-					    false, NULL,
+					    0, NULL,
 					    &dev_priv->rps.mmioflips));
 
 	/* For framebuffer backed by dmabuf, wait for fence */
@@ -13602,7 +13602,8 @@ static int intel_atomic_prepare_commit(struct drm_device *dev,
 			continue;
 
 		ret = __i915_wait_request(intel_plane_state->wait_req,
-					  true, NULL, NULL);
+					  I915_WAIT_REQUEST_INTERRUPTIBLE,
+					  NULL, NULL);
 		if (ret) {
 			/* Any hang should be swallowed by the wait */
 			WARN_ON(ret == -EIO);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index fedd270..8d34f1c 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -2414,6 +2414,7 @@ void intel_cleanup_engine(struct intel_engine_cs *engine)
 int intel_engine_idle(struct intel_engine_cs *engine)
 {
 	struct drm_i915_gem_request *req;
+	u32 flags;
 
 	/* Wait upon the last request to be completed */
 	if (list_empty(&engine->request_list))
@@ -2423,10 +2424,11 @@ int intel_engine_idle(struct intel_engine_cs *engine)
 			 struct drm_i915_gem_request,
 			 list);
 
+	flags = req->i915->mm.interruptible ?
+		I915_WAIT_REQUEST_INTERRUPTIBLE : 0;
+	flags |= I915_WAIT_REQUEST_LOCKED;
+
 	/* Make sure we do not trigger any retires */
-	return __i915_wait_request(req,
-				   req->i915->mm.interruptible,
-				   NULL, NULL);
+	return __i915_wait_request(req, flags, NULL, NULL);
 }
 
 int intel_ring_alloc_request_extras(struct drm_i915_gem_request *request)
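
For reviewers, the decision the wait-loop hunk changes can be condensed into
a standalone sketch of a single loop iteration (plain C, simplified model:
`wait_step` and its parameters are hypothetical; only the bail-out condition
mirrors the patch):

```c
#include <assert.h>
#include <errno.h>

/* Flag bits with the same values as the patch's i915_drv.h additions. */
#define I915_WAIT_REQUEST_INTERRUPTIBLE (1 << 0)
#define I915_WAIT_REQUEST_LOCKED        (1 << 1)

/* One pass of the wait loop, reduced to the check this patch changes:
 * mirroring the new 'if (reset_pending || locked)' test, a waiter that
 * holds struct_mutex (LOCKED) is thrown out with -EAGAIN instead of
 * continuing to sleep, so TDR hang recovery can acquire the mutex. */
static int wait_step(unsigned int flags, int completed, int reset_pending)
{
	int locked = (flags & I915_WAIT_REQUEST_LOCKED) != 0;

	if (completed)
		return 0;		/* request finished: success */
	if (reset_pending || locked)
		return -EAGAIN;		/* back off, as in the patch */
	return 1;			/* keep waiting (sketch only) */
}
```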