From patchwork Thu Jun 27 23:30:32 2013
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 2796281
From: Ben Widawsky
To: Intel GFX
Date: Thu, 27 Jun 2013 16:30:32 -0700
Message-Id: <1372375867-1003-32-git-send-email-ben@bwidawsk.net>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1372375867-1003-1-git-send-email-ben@bwidawsk.net>
References: <1372375867-1003-1-git-send-email-ben@bwidawsk.net>
Cc: Ben Widawsky
Subject: [Intel-gfx] [PATCH 31/66] drm/i915: Create VMAs (part 1)

Creates the VMA, but leaves the old obj->gtt_space in place. This
primarily just puts the basic infrastructure in place, and helps check
for leaks.

BISECT WARNING: This patch was not meant for bisect. If it does end up
upstream, it should be included in the 3-part series for creating the
VMA.

v2: s/i915_obj/i915_gem_obj (Chris)

v3: Only move an object to the now-global unbound list if there are no
more VMAs for the object which are bound into a VM (i.e. the list is
empty).

Signed-off-by: Ben Widawsky
---
 drivers/gpu/drm/i915/i915_drv.h        | 30 ++++++++++++++++++-
 drivers/gpu/drm/i915/i915_gem.c        | 54 ++++++++++++++++++++++++++++++++--
 drivers/gpu/drm/i915/i915_gem_evict.c  |  8 ++++-
 drivers/gpu/drm/i915/i915_gem_gtt.c    |  3 ++
 drivers/gpu/drm/i915/i915_gem_stolen.c | 13 ++++++++
 5 files changed, 104 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 56d47bc..bd4640a 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -539,6 +539,19 @@ struct i915_hw_ppgtt {
 	void (*cleanup)(struct i915_hw_ppgtt *ppgtt);
 };
 
+/* To make things as simple as possible (i.e. no refcounting), a VMA's lifetime
+ * will always be <= an object's lifetime. So object refcounting should cover us.
+ */
+struct i915_vma {
+	struct i915_address_space *vm;
+	struct drm_i915_gem_object *obj;
+	struct drm_mm_node node;
+	/* Page aligned offset (helper for stolen) */
+	unsigned long deferred_offset;
+
+	struct list_head vma_link; /* Link in the object's VMA list */
+};
+
 struct i915_ctx_hang_stats {
 	/* This context had batch pending when hang was declared */
 	unsigned batch_pending;
@@ -1222,8 +1235,9 @@ struct drm_i915_gem_object {
 
 	const struct drm_i915_gem_object_ops *ops;
 
-	/** Current space allocated to this object in the GTT, if any. */
 	struct drm_mm_node *gtt_space;
+	struct list_head vma_list;
+
 	/** Stolen memory for this object, instead of being backed by shmem. */
 	struct drm_mm_node *stolen;
 	struct list_head global_list;
@@ -1351,6 +1365,7 @@ struct drm_i915_gem_object {
 static inline unsigned long
 i915_gem_obj_offset(struct drm_i915_gem_object *o)
 {
+	BUG_ON(list_empty(&o->vma_list));
 	return o->gtt_space->start;
 }
 
@@ -1361,6 +1376,7 @@ static inline bool i915_gem_obj_bound(struct drm_i915_gem_object *o)
 static inline unsigned long i915_gem_obj_size(struct drm_i915_gem_object *o)
 {
+	BUG_ON(list_empty(&o->vma_list));
 	return o->gtt_space->size;
 }
 
@@ -1370,6 +1386,16 @@ static inline void i915_gem_obj_set_color(struct drm_i915_gem_object *o,
 	o->gtt_space->color = color;
 }
 
+/* This is a temporary define to help transition us to real VMAs. If you see
+ * this, you're either reviewing code, or bisecting it. */
+static inline struct i915_vma *
+__i915_gem_obj_to_vma(struct drm_i915_gem_object *obj)
+{
+	BUG_ON(!i915_gem_obj_bound(obj));
+	BUG_ON(list_empty(&obj->vma_list));
+	return list_first_entry(&obj->vma_list, struct i915_vma, vma_link);
+}
+
 /**
  * Request queue structure.
  *
@@ -1680,6 +1706,8 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
 struct drm_i915_gem_object *i915_gem_alloc_object(struct drm_device *dev,
 						  size_t size);
 void i915_gem_free_object(struct drm_gem_object *obj);
+struct i915_vma *i915_gem_vma_create(struct drm_i915_gem_object *obj);
+void i915_gem_vma_destroy(struct i915_vma *vma);
 
 int __must_check i915_gem_object_pin(struct drm_i915_gem_object *obj,
 				     uint32_t alignment,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index dd2228d..a41b2f1 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2585,6 +2585,7 @@ int
 i915_gem_object_unbind(struct drm_i915_gem_object *obj)
 {
 	drm_i915_private_t *dev_priv = obj->base.dev->dev_private;
+	struct i915_vma *vma;
 	int ret;
 
 	if (!i915_gem_obj_bound(obj))
@@ -2622,13 +2623,22 @@ i915_gem_object_unbind(struct drm_i915_gem_object *obj)
 	i915_gem_object_unpin_pages(obj);
 
 	list_del(&obj->mm_list);
-	list_move_tail(&obj->global_list, &dev_priv->mm.unbound_list);
 	/* Avoid an unnecessary call to unbind on rebind. */
 	obj->map_and_fenceable = true;
 
+	vma = __i915_gem_obj_to_vma(obj);
+	list_del(&vma->vma_link);
+	/* FIXME: drm_mm_remove_node(&vma->node); */
+	i915_gem_vma_destroy(vma);
+
 	drm_mm_put_block(obj->gtt_space);
 	obj->gtt_space = NULL;
 
+	/* Since the unbound list is global, only move to that list if
+	 * no more VMAs exist */
+	if (list_empty(&obj->vma_list))
+		list_move_tail(&obj->global_list, &dev_priv->mm.unbound_list);
+
 	return 0;
 }
 
@@ -3079,8 +3089,12 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj,
 	bool mappable, fenceable;
 	size_t gtt_max = map_and_fenceable ?
 		dev_priv->gtt.mappable_end : dev_priv->gtt.base.total;
+	struct i915_vma *vma;
 	int ret;
 
+	if (WARN_ON(!list_empty(&obj->vma_list)))
+		return -EBUSY;
+
 	fence_size = i915_gem_get_gtt_size(dev,
 					   obj->base.size,
 					   obj->tiling_mode);
@@ -3124,6 +3138,12 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj,
 		i915_gem_object_unpin_pages(obj);
 		return -ENOMEM;
 	}
+	vma = i915_gem_vma_create(obj);
+	if (vma == NULL) {
+		kfree(node);
+		i915_gem_object_unpin_pages(obj);
+		return -ENOMEM;
+	}
 
 search_free:
 	ret = drm_mm_insert_node_in_range_generic(&i915_gtt_vm->mm, node,
@@ -3160,6 +3180,9 @@ search_free:
 	list_add_tail(&obj->mm_list, &i915_gtt_vm->inactive_list);
 
 	obj->gtt_space = node;
+	vma->node.start = node->start;
+	vma->node.size = node->size;
+	list_add(&vma->vma_link, &obj->vma_list);
 
 	fenceable =
 		node->size == fence_size &&
@@ -3317,6 +3340,7 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
 {
 	struct drm_device *dev = obj->base.dev;
 	drm_i915_private_t *dev_priv = dev->dev_private;
+	struct drm_mm_node *node = NULL;
 	int ret;
 
 	if (obj->cache_level == cache_level)
@@ -3327,7 +3351,12 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
 		return -EBUSY;
 	}
 
-	if (!i915_gem_valid_gtt_space(dev, obj->gtt_space, cache_level)) {
+	if (i915_gem_obj_bound(obj)) {
+		node = obj->gtt_space;
+		BUG_ON(node->start != __i915_gem_obj_to_vma(obj)->node.start);
+	}
+
+	if (!i915_gem_valid_gtt_space(dev, node, cache_level)) {
 		ret = i915_gem_object_unbind(obj);
 		if (ret)
 			return ret;
@@ -3872,6 +3901,7 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
 	INIT_LIST_HEAD(&obj->global_list);
 	INIT_LIST_HEAD(&obj->ring_list);
 	INIT_LIST_HEAD(&obj->exec_list);
+	INIT_LIST_HEAD(&obj->vma_list);
 
 	obj->ops = ops;
 
@@ -3992,6 +4022,26 @@ void i915_gem_free_object(struct drm_gem_object *gem_obj)
 	i915_gem_object_free(obj);
 }
 
+struct i915_vma *i915_gem_vma_create(struct drm_i915_gem_object *obj)
+{
+	struct drm_i915_private *dev_priv =
+		obj->base.dev->dev_private;
+	struct i915_vma *vma = kzalloc(sizeof(*vma), GFP_KERNEL);
+	if (vma == NULL)
+		return NULL;
+
+	INIT_LIST_HEAD(&vma->vma_link);
+	vma->vm = i915_gtt_vm;
+	vma->obj = obj;
+
+	return vma;
+}
+
+void i915_gem_vma_destroy(struct i915_vma *vma)
+{
+	WARN_ON(vma->node.allocated);
+	kfree(vma);
+}
+
 int
 i915_gem_idle(struct drm_device *dev)
 {
diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
index 92856a2..0434c9e 100644
--- a/drivers/gpu/drm/i915/i915_gem_evict.c
+++ b/drivers/gpu/drm/i915/i915_gem_evict.c
@@ -38,6 +38,8 @@ mark_free(struct drm_i915_gem_object *obj, struct list_head *unwind)
 		return false;
 
 	list_add(&obj->exec_list, unwind);
+	BUG_ON(__i915_gem_obj_to_vma(obj)->node.start !=
+	       i915_gem_obj_offset(obj));
 	return drm_mm_scan_add_block(obj->gtt_space);
 }
 
@@ -48,6 +50,7 @@ i915_gem_evict_something(struct drm_device *dev, int min_size,
 {
 	drm_i915_private_t *dev_priv = dev->dev_private;
 	struct list_head eviction_list, unwind_list;
+	struct i915_vma *vma;
 	struct drm_i915_gem_object *obj;
 	int ret = 0;
 
@@ -106,7 +109,8 @@ none:
 		obj = list_first_entry(&unwind_list,
 				       struct drm_i915_gem_object,
 				       exec_list);
-
+		vma = __i915_gem_obj_to_vma(obj);
+		BUG_ON(vma->node.start != i915_gem_obj_offset(obj));
 		ret = drm_mm_scan_remove_block(obj->gtt_space);
 		BUG_ON(ret);
 
@@ -127,6 +131,8 @@ found:
 		obj = list_first_entry(&unwind_list,
 				       struct drm_i915_gem_object,
 				       exec_list);
+		vma = __i915_gem_obj_to_vma(obj);
+		BUG_ON(vma->node.start != i915_gem_obj_offset(obj));
 		if (drm_mm_scan_remove_block(obj->gtt_space)) {
 			list_move(&obj->exec_list, &eviction_list);
 			drm_gem_object_reference(&obj->base);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 566ab76..b59f846 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -687,6 +687,8 @@ void i915_gem_setup_global_gtt(struct drm_device *dev,
 			      i915_gem_obj_offset(obj),
 			      obj->base.size);
 		BUG_ON((gtt_offset & I915_GTT_RESERVED) == 0);
+		BUG_ON((__i915_gem_obj_to_vma(obj)->deferred_offset
+			& I915_GTT_RESERVED) == 0);
 		gtt_offset = gtt_offset & ~I915_GTT_RESERVED;
 		obj->gtt_space = kzalloc(sizeof(*obj->gtt_space), GFP_KERNEL);
 		if (!obj->gtt_space) {
@@ -700,6 +702,7 @@ void i915_gem_setup_global_gtt(struct drm_device *dev,
 		if (ret)
 			DRM_DEBUG_KMS("Reservation failed\n");
 		obj->has_global_gtt_mapping = 1;
+		list_add(&__i915_gem_obj_to_vma(obj)->vma_link, &obj->vma_list);
 	}
 
 	i915_gtt_vm->start = start;
diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
index 925f3b1..6e22355 100644
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
@@ -330,6 +330,7 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct drm_i915_gem_object *obj;
 	struct drm_mm_node *stolen;
+	struct i915_vma *vma;
 	int ret;
 
 	if (dev_priv->gtt.stolen_base == 0)
@@ -368,6 +369,12 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 	if (gtt_offset == -1)
 		return obj;
 
+	vma = i915_gem_vma_create(obj);
+	if (!vma) {
+		drm_gem_object_unreference(&obj->base);
+		return NULL;
+	}
+
 	/* To simplify the initialisation sequence between KMS and GTT,
 	 * we allow construction of the stolen object prior to
 	 * setting up the GTT space. The actual reservation will occur
@@ -376,6 +383,7 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 	if (drm_mm_initialized(&i915_gtt_vm->mm)) {
 		obj->gtt_space = kzalloc(sizeof(*obj->gtt_space), GFP_KERNEL);
 		if (!obj->gtt_space) {
+			i915_gem_vma_destroy(vma);
 			drm_gem_object_unreference(&obj->base);
 			return NULL;
 		}
@@ -383,15 +391,20 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 					  gtt_offset, size);
 		if (ret) {
 			DRM_DEBUG_KMS("failed to allocate stolen GTT space\n");
+			i915_gem_vma_destroy(vma);
 			drm_gem_object_unreference(&obj->base);
 			kfree(obj->gtt_space);
 			return NULL;
 		}
+		vma->node.start = obj->gtt_space->start;
+		vma->node.size = obj->gtt_space->size;
 		obj->gtt_space->start = gtt_offset;
+		list_add(&vma->vma_link, &obj->vma_list);
 	} else {
 		/* NB: Safe because we assert page alignment */
 		obj->gtt_space = (struct drm_mm_node *)
 			((uintptr_t)gtt_offset | I915_GTT_RESERVED);
+		vma->deferred_offset = gtt_offset | I915_GTT_RESERVED;
	}
 
 	obj->has_global_gtt_mapping = 1;