From patchwork Thu Aug 1 00:00:17 2013
From: Ben Widawsky
To: Intel GFX
Cc: Ben Widawsky
Date: Wed, 31 Jul 2013 17:00:17 -0700
Message-Id: <1375315222-4785-25-git-send-email-ben@bwidawsk.net>
In-Reply-To: <1375315222-4785-1-git-send-email-ben@bwidawsk.net>
References: <1375315222-4785-1-git-send-email-ben@bwidawsk.net>
Subject: [Intel-gfx] [PATCH 24/29] drm/i915: create vmas at execbuf

In order to transition more of our code over to using a VMA instead of
an <obj, vm> pair, we must have the VMA accessible at execbuf time. Up
until now, we've only had a VMA when actually binding an object.

The previous patch helped handle the distinction between bound and
unbound. This patch will help us catch leaks and other issues before we
actually shuffle a bunch of stuff around.

The subsequent patch to fix up the rest of execbuf should be mostly
just code movement; this patch carries the major functional change.

v2: Release table_lock earlier so vma allocation needn't be atomic.
(Chris)

Signed-off-by: Ben Widawsky
---
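A note on the v2 change: file->table_lock is a spinlock, so any
allocation performed while it is held would have to be GFP_ATOMIC.
Splitting the lookup into two passes lets the VMA creation happen after
spin_unlock(), where a normal sleeping allocation is legal. The sketch
below shows only the shape of that pattern; the names (struct item,
gather_then_allocate) are illustrative, not the i915 code:

/* Sketch of the two-pass pattern only -- hypothetical names. */
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct item {
	struct list_head link;
	void *payload;
};

static int gather_then_allocate(spinlock_t *lock, struct list_head *src,
				struct list_head *gathered)
{
	struct item *it, *next;

	/* Pass 1: walk the source list under the spinlock. Nothing in
	 * this section may sleep, so no GFP_KERNEL allocations here. */
	spin_lock(lock);
	list_for_each_entry_safe(it, next, src, link)
		list_move_tail(&it->link, gathered);
	spin_unlock(lock);

	/* Pass 2: the lock is dropped, so the sleeping allocation (the
	 * analogue of i915_gem_vma_create()) can use GFP_KERNEL. */
	list_for_each_entry(it, gathered, link) {
		it->payload = kmalloc(64, GFP_KERNEL);
		if (!it->payload)
			return -ENOMEM; /* caller unwinds the list */
	}
	return 0;
}

The cost is a second walk over the gathered objects, which is cheap
compared to forcing every VMA allocation into atomic context.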
 drivers/gpu/drm/i915/i915_drv.h            |  3 +++
 drivers/gpu/drm/i915/i915_gem.c            | 25 ++++++++++++++++++-------
 drivers/gpu/drm/i915/i915_gem_execbuffer.c | 18 +++++++++++++-----
 3 files changed, 34 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index f6c2812..c0eb7fd 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1857,6 +1857,9 @@ unsigned long i915_gem_obj_size(struct drm_i915_gem_object *o,
 			    struct i915_address_space *vm);
 struct i915_vma *i915_gem_obj_to_vma(struct drm_i915_gem_object *obj,
 				     struct i915_address_space *vm);
+struct i915_vma *
+i915_gem_obj_lookup_or_create_vma(struct drm_i915_gem_object *obj,
+				  struct i915_address_space *vm);
 /* Some GGTT VM helpers */
 #define obj_to_ggtt(obj) \
 	(&((struct drm_i915_private *)(obj)->base.dev->dev_private)->gtt.base)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 21331d8..72bd53c 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -3101,8 +3101,7 @@ i915_gem_object_bind_to_vm(struct drm_i915_gem_object *obj,
 	struct i915_vma *vma;
 	int ret;
 
-	if (WARN_ON(!list_empty(&obj->vma_list)))
-		return -EBUSY;
+	BUG_ON(!i915_is_ggtt(vm));
 
 	fence_size = i915_gem_get_gtt_size(dev,
 					   obj->base.size,
@@ -3142,16 +3141,15 @@ i915_gem_object_bind_to_vm(struct drm_i915_gem_object *obj,
 
 	i915_gem_object_pin_pages(obj);
 
-	/* FIXME: For now we only ever use 1 VMA per object */
-	BUG_ON(!i915_is_ggtt(vm));
-	WARN_ON(!list_empty(&obj->vma_list));
-
-	vma = i915_gem_vma_create(obj, vm);
+	vma = i915_gem_obj_lookup_or_create_vma(obj, vm);
 	if (IS_ERR(vma)) {
 		i915_gem_object_unpin_pages(obj);
 		return PTR_ERR(vma);
 	}
 
+	/* For now we only ever use 1 vma per object */
+	WARN_ON(!list_is_singular(&obj->vma_list));
+
 search_free:
 	ret = drm_mm_insert_node_in_range_generic(&vm->mm, &vma->node,
 						  size, alignment,
@@ -4800,3 +4798,16 @@ struct i915_vma *i915_gem_obj_to_vma(struct drm_i915_gem_object *obj,
 
 	return NULL;
 }
+
+struct i915_vma *
+i915_gem_obj_lookup_or_create_vma(struct drm_i915_gem_object *obj,
+				  struct i915_address_space *vm)
+{
+	struct i915_vma *vma;
+
+	vma = i915_gem_obj_to_vma(obj, vm);
+	if (!vma)
+		vma = i915_gem_vma_create(obj, vm);
+
+	return vma;
+}
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 0f21702..3f17a55 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -85,14 +85,14 @@ static int
 eb_lookup_objects(struct eb_objects *eb,
 		  struct drm_i915_gem_exec_object2 *exec,
 		  const struct drm_i915_gem_execbuffer2 *args,
+		  struct i915_address_space *vm,
 		  struct drm_file *file)
 {
+	struct drm_i915_gem_object *obj;
 	int i;
 
 	spin_lock(&file->table_lock);
 	for (i = 0; i < args->buffer_count; i++) {
-		struct drm_i915_gem_object *obj;
-
 		obj = to_intel_bo(idr_find(&file->object_idr, exec[i].handle));
 		if (obj == NULL) {
 			spin_unlock(&file->table_lock);
@@ -110,6 +110,15 @@ eb_lookup_objects(struct eb_objects *eb,
 
 		drm_gem_object_reference(&obj->base);
 		list_add_tail(&obj->exec_list, &eb->objects);
+	}
+	spin_unlock(&file->table_lock);
+
+	list_for_each_entry(obj, &eb->objects, exec_list) {
+		struct i915_vma *vma;
+
+		vma = i915_gem_obj_lookup_or_create_vma(obj, vm);
+		if (IS_ERR(vma))
+			return PTR_ERR(vma);
 
 		obj->exec_entry = &exec[i];
 		if (eb->and < 0) {
@@ -121,7 +130,6 @@ eb_lookup_objects(struct eb_objects *eb,
 			       &eb->buckets[handle & eb->and]);
 		}
 	}
-	spin_unlock(&file->table_lock);
 
 	return 0;
 }
@@ -672,7 +680,7 @@ i915_gem_execbuffer_relocate_slow(struct drm_device *dev,
 
 	/* reacquire the objects */
 	eb_reset(eb);
-	ret = eb_lookup_objects(eb, exec, args, file);
+	ret = eb_lookup_objects(eb, exec, args, vm, file);
 	if (ret)
 		goto err;
 
@@ -1009,7 +1017,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
 	}
 
 	/* Look up object handles */
-	ret = eb_lookup_objects(eb, exec, args, file);
+	ret = eb_lookup_objects(eb, exec, args, vm, file);
 	if (ret)
 		goto err;
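
For what it's worth, the lookup-or-create helper is what keeps the new
WARN_ON(!list_is_singular(&obj->vma_list)) in bind_to_vm from firing
when execbuf has already created the VMA: under struct_mutex the helper
is idempotent, so a second call finds the existing VMA instead of
adding a duplicate. A sketch of that contract (not a real call site,
and error handling elided):

	/* Both calls run under dev->struct_mutex, so no VMA can be
	 * created or destroyed between them: the second lookup returns
	 * the VMA the first call created, and obj->vma_list still has
	 * exactly one entry for this address space. */
	struct i915_vma *a = i915_gem_obj_lookup_or_create_vma(obj, vm);
	struct i915_vma *b = i915_gem_obj_lookup_or_create_vma(obj, vm);
	WARN_ON(!IS_ERR(a) && a != b);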