From patchwork Sat Apr 16 09:17:41 2011
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 712091
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Date: Sat, 16 Apr 2011 10:17:41 +0100
Message-Id: <1302945465-32115-18-git-send-email-chris@chris-wilson.co.uk>
In-Reply-To: <1302945465-32115-1-git-send-email-chris@chris-wilson.co.uk>
References: <1302945465-32115-1-git-send-email-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH 17/21] drm/i915: Repeat retiring of requests until the seqno is stable

The list walking over objects and requests to retire may take a
significant amount of time, enough time for the GPU to have finished
more batches. So repeat the list walking until the GPU seqno is stable.
Signed-off-by: Chris Wilson
Reviewed-by: Daniel Vetter
---
 drivers/gpu/drm/i915/i915_gem.c |   66 ++++++++++++++++++++-------------------
 1 files changed, 34 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index b9515ac..58e77d6 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1952,47 +1952,45 @@ i915_gem_retire_requests_ring(struct intel_ring_buffer *ring)
 
 	WARN_ON(i915_verify_lists(ring->dev));
 
-	seqno = ring->get_seqno(ring);
+	do {
+		seqno = ring->get_seqno(ring);
 
-	for (i = 0; i < ARRAY_SIZE(ring->sync_seqno); i++)
-		if (seqno >= ring->sync_seqno[i])
-			ring->sync_seqno[i] = 0;
+		while (!list_empty(&ring->request_list)) {
+			struct drm_i915_gem_request *request;
 
-	while (!list_empty(&ring->request_list)) {
-		struct drm_i915_gem_request *request;
+			request = list_first_entry(&ring->request_list,
+						   struct drm_i915_gem_request,
+						   list);
 
-		request = list_first_entry(&ring->request_list,
-					   struct drm_i915_gem_request,
-					   list);
-
-		if (!i915_seqno_passed(seqno, request->seqno))
-			break;
+			if (!i915_seqno_passed(seqno, request->seqno))
+				break;
 
-		trace_i915_gem_request_retire(ring, request->seqno);
+			trace_i915_gem_request_retire(ring, request->seqno);
 
-		list_del(&request->list);
-		i915_gem_request_remove_from_client(request);
-		kfree(request);
-	}
+			list_del(&request->list);
+			i915_gem_request_remove_from_client(request);
+			kfree(request);
+		}
 
-	/* Move any buffers on the active list that are no longer referenced
-	 * by the ringbuffer to the flushing/inactive lists as appropriate.
-	 */
-	while (!list_empty(&ring->active_list)) {
-		struct drm_i915_gem_object *obj;
+		/* Move any buffers on the active list that are no longer referenced
+		 * by the ringbuffer to the flushing/inactive lists as appropriate.
+		 */
+		while (!list_empty(&ring->active_list)) {
+			struct drm_i915_gem_object *obj;
 
-		obj= list_first_entry(&ring->active_list,
-				      struct drm_i915_gem_object,
-				      ring_list);
+			obj = list_first_entry(&ring->active_list,
					       struct drm_i915_gem_object,
					       ring_list);
 
-		if (!i915_seqno_passed(seqno, obj->last_rendering_seqno))
-			break;
+			if (!i915_seqno_passed(seqno, obj->last_rendering_seqno))
+				break;
 
-		if (obj->base.write_domain != 0)
-			i915_gem_object_move_to_flushing(obj);
-		else
-			i915_gem_object_move_to_inactive(obj);
-	}
+			if (obj->base.write_domain != 0)
+				i915_gem_object_move_to_flushing(obj);
+			else
+				i915_gem_object_move_to_inactive(obj);
+		}
+	} while (seqno != ring->get_seqno(ring));
 
 	if (unlikely(ring->trace_irq_seqno &&
 		     i915_seqno_passed(seqno, ring->trace_irq_seqno))) {
@@ -2000,6 +1998,10 @@ i915_gem_retire_requests_ring(struct intel_ring_buffer *ring)
 		ring->trace_irq_seqno = 0;
 	}
 
+	for (i = 0; i < ARRAY_SIZE(ring->sync_seqno); i++)
+		if (seqno >= ring->sync_seqno[i])
+			ring->sync_seqno[i] = 0;
+
 	WARN_ON(i915_verify_lists(ring->dev));
 }
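
[Editor's note] For readers who want the core idea in isolation, below is a
minimal standalone sketch of the retire-until-stable loop this patch
introduces. All names in it (fake_ring, ring_get_seqno, retire_up_to) are
hypothetical stand-ins invented for illustration, not the real i915 types or
APIs, and it uses plain integer comparisons rather than the driver's
wrap-safe i915_seqno_passed().

/*
 * Minimal sketch of the retire-until-stable pattern, for illustration only.
 * fake_ring, ring_get_seqno() and retire_up_to() are hypothetical stand-ins,
 * not the real i915 structures or functions.
 */
#include <stdio.h>

struct fake_ring {
	unsigned int hw_seqno;		/* seqno the "GPU" reports as completed */
	unsigned int retired_seqno;	/* seqno the "CPU" has retired so far */
};

/* Stand-in for ring->get_seqno(ring); pretend the GPU keeps finishing
 * batches (up to seqno 5) each time we look. */
static unsigned int ring_get_seqno(struct fake_ring *ring)
{
	if (ring->hw_seqno < 5)
		ring->hw_seqno++;
	return ring->hw_seqno;
}

/* Stand-in for the request/object list walk: retire everything up to seqno. */
static void retire_up_to(struct fake_ring *ring, unsigned int seqno)
{
	while (ring->retired_seqno < seqno) {
		ring->retired_seqno++;
		printf("retired request %u\n", ring->retired_seqno);
	}
}

static void retire_requests(struct fake_ring *ring)
{
	unsigned int seqno;

	/* The walk may take long enough for the GPU to complete more batches,
	 * so re-read the seqno afterwards and repeat until it stops moving. */
	do {
		seqno = ring_get_seqno(ring);
		retire_up_to(ring, seqno);
	} while (seqno != ring_get_seqno(ring));
}

int main(void)
{
	struct fake_ring ring = { .hw_seqno = 2, .retired_seqno = 0 };

	retire_requests(&ring);
	return 0;
}

The re-read of the seqno in the loop condition is what the commit message is
after: if the GPU completed more batches while the lists were being walked,
the walk runs again instead of leaving those freshly completed requests
pending until the next retire pass.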