From patchwork Sun Mar 20 08:58:55 2011
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 646461
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Date: Sun, 20 Mar 2011 08:58:55 +0000
Message-Id: <1300611539-24791-12-git-send-email-chris@chris-wilson.co.uk>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1300611539-24791-1-git-send-email-chris@chris-wilson.co.uk>
References: <1300611539-24791-1-git-send-email-chris@chris-wilson.co.uk>
Cc: Andy Whitcroft
Subject: [Intel-gfx] [PATCH 11/15] drm/i915: Cleanup handling of last_fenced_seqno

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 5201f82..73ede9e 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2461,14 +2461,21 @@ i915_gem_object_flush_fence(struct drm_i915_gem_object *obj,
 		obj->fenced_gpu_access = false;
 	}
 
+	if (obj->last_fenced_seqno &&
+	    ring_passed_seqno(obj->ring, obj->last_fenced_seqno))
+		obj->last_fenced_seqno = 0;
+
 	if (obj->last_fenced_seqno && pipelined != obj->ring) {
-		if (!ring_passed_seqno(obj->ring, obj->last_fenced_seqno)) {
-			ret = i915_wait_request(obj->ring,
-						obj->last_fenced_seqno);
-			if (ret)
-				return ret;
-		}
+		ret = i915_wait_request(obj->ring,
+					obj->last_fenced_seqno);
+		if (ret)
+			return ret;
 
+		/* Since last_fence_seqno can retire much earlier than
+		 * last_rendering_seqno, we track that here for efficiency.
+		 * (With a catch-all in move_to_inactive() to prevent very
+		 * old seqno from lying around.)
+		 */
 		obj->last_fenced_seqno = 0;
 	}
 
@@ -2553,7 +2560,6 @@ i915_find_fence_reg(struct drm_device *dev,
  * i915_gem_object_get_fence - set up a fence reg for an object
  * @obj: object to map through a fence reg
  * @pipelined: ring on which to queue the change, or NULL for CPU access
- * @interruptible: must we wait uninterruptibly for the register to retire?
  *
  * When mapping objects through the GTT, userspace wants to be able to write
  * to them without having to worry about swizzling if the object is tiled.
@@ -2561,6 +2567,10 @@ i915_find_fence_reg(struct drm_device *dev,
  * This function walks the fence regs looking for a free one for @obj,
  * stealing one if it can't find any.
  *
+ * Note: if two fence registers point to the same or overlapping memory region
+ * the results are undefined. This is even more fun with asynchronous updates
+ * via the GPU!
+ *
  * It then sets up the reg based on the object's properties: address, pitch
  * and tiling format.
  */
@@ -2586,9 +2596,6 @@ i915_gem_object_get_fence(struct drm_i915_gem_object *obj,
 			if (ret)
 				return ret;
 
-			if (!obj->fenced_gpu_access && !obj->last_fenced_seqno)
-				pipelined = NULL;
-
 			goto update;
 		}
 
@@ -2606,9 +2613,12 @@ i915_gem_object_get_fence(struct drm_i915_gem_object *obj,
 				reg->setup_ring = NULL;
 			}
 		} else if (obj->last_fenced_seqno && obj->ring != pipelined) {
-			ret = i915_gem_object_flush_fence(obj, pipelined);
+			ret = i915_wait_request(obj->ring,
+						obj->last_fenced_seqno);
 			if (ret)
 				return ret;
+
+			obj->last_fenced_seqno = 0;
 		}
 
 		return 0;
@@ -2648,15 +2658,22 @@ i915_gem_object_get_fence(struct drm_i915_gem_object *obj,
 						old->last_fenced_seqno);
 		}
 
+		obj->last_fenced_seqno = old->last_fenced_seqno;
 		drm_gem_object_unreference(&old->base);
-	} else if (obj->last_fenced_seqno == 0)
-		pipelined = NULL;
+	}
 
 	reg->obj = obj;
 	list_move_tail(&reg->lru_list, &dev_priv->mm.fence_list);
 	obj->fence_reg = reg - dev_priv->fence_regs;
 
 update:
+	/* If we had a pipelined request, but there is no pending GPU access or
+	 * update to a fence register for this memory region, we can write
+	 * the new fence register immediately.
+	 */
+	if (obj->last_fenced_seqno == 0)
+		pipelined = NULL;
+
 	reg->setup_seqno =
 		pipelined ? i915_gem_next_request_seqno(pipelined) : 0;
 	reg->setup_ring = pipelined;
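
As an illustrative aside, the control flow introduced by the first hunk (drop a stale last_fenced_seqno as soon as the ring has passed it, and only fall back to a blocking wait when the fenced access is still outstanding on a different ring) can be sketched in isolation. This is a minimal standalone sketch with invented types and helper names (fake_ring, fake_object, seqno_passed, wait_for_seqno); it is not the i915 driver's code, only the shape of the check-before-wait pattern the patch applies.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Mock stand-ins for the driver structures; names are invented for
 * this illustration and do not match drivers/gpu/drm/i915. */
struct fake_ring {
	uint32_t completed_seqno;	/* last seqno the "GPU" retired */
};

struct fake_object {
	struct fake_ring *ring;		/* ring of the last fenced access */
	uint32_t last_fenced_seqno;	/* 0 means no outstanding fenced access */
};

static bool seqno_passed(const struct fake_ring *ring, uint32_t seqno)
{
	/* The real driver uses a wrap-safe comparison; a plain compare
	 * is enough for this toy monotonically increasing counter. */
	return ring->completed_seqno >= seqno;
}

static int wait_for_seqno(struct fake_ring *ring, uint32_t seqno)
{
	/* Stand-in for a blocking wait: pretend the GPU catches up. */
	printf("blocking wait for seqno %u\n", (unsigned)seqno);
	if (ring->completed_seqno < seqno)
		ring->completed_seqno = seqno;
	return 0;
}

/* Roughly mirrors the reworked flush path: clear the seqno eagerly if it
 * has already retired, otherwise wait only when the pending access is not
 * on the ring we are about to use. */
static int flush_fence(struct fake_object *obj, struct fake_ring *pipelined)
{
	int ret;

	if (obj->last_fenced_seqno &&
	    seqno_passed(obj->ring, obj->last_fenced_seqno))
		obj->last_fenced_seqno = 0;

	if (obj->last_fenced_seqno && pipelined != obj->ring) {
		ret = wait_for_seqno(obj->ring, obj->last_fenced_seqno);
		if (ret)
			return ret;

		obj->last_fenced_seqno = 0;
	}

	return 0;
}

int main(void)
{
	struct fake_ring ring = { .completed_seqno = 10 };
	struct fake_object obj = { .ring = &ring, .last_fenced_seqno = 7 };

	/* Seqno 7 already retired: cleared without blocking. */
	flush_fence(&obj, NULL);
	printf("after retired access: %u\n", (unsigned)obj.last_fenced_seqno);

	/* An outstanding seqno for CPU access (pipelined == NULL) forces
	 * the blocking path before the seqno is cleared. */
	obj.last_fenced_seqno = 15;
	flush_fence(&obj, NULL);
	printf("after blocking wait:  %u\n", (unsigned)obj.last_fenced_seqno);
	return 0;
}
```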