From patchwork Fri Jun 20 10:02:10 2014
X-Patchwork-Submitter: sourab.gupta@intel.com
X-Patchwork-Id: 4387641
From: sourab.gupta@intel.com
To: intel-gfx@lists.freedesktop.org
Date: Fri, 20 Jun 2014 15:32:10 +0530
Message-Id: <1403258530-12548-5-git-send-email-sourab.gupta@intel.com>
In-Reply-To: <1403258530-12548-1-git-send-email-sourab.gupta@intel.com>
References: <1403258530-12548-1-git-send-email-sourab.gupta@intel.com>
Cc: Daniel Vetter, Akash Goel, "Gupta, Sourab"
Subject: [Intel-gfx] [PATCH v2 4/4] drm/i915: Add support for stealing purgable stolen pages

From: Chris Wilson

If we run out of stolen memory when trying to allocate an object, see if
we can reap enough purgeable objects to free up enough contiguous free
space for the allocation. This is in principle very much like evicting
objects to free up enough contiguous space in the vma when binding a new
object - and you will be forgiven for thinking that the code looks very
similar.

At the moment, we do not allow userspace to allocate objects in stolen,
so there is neither the memory pressure to trigger stolen eviction nor
any purgeable objects inside the stolen arena. However, this will change
in the near future, and so better management and defragmentation of
stolen memory will become a real issue.

v2: Remember to remove the drm_mm_node.
testcase: igt/gem_create2
Signed-off-by: Chris Wilson
Cc: "Gupta, Sourab"
Cc: "Goel, Akash"
---
 drivers/gpu/drm/i915/i915_gem_stolen.c | 121 ++++++++++++++++++++++++++++++---
 1 file changed, 110 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
index 6441178..042ae61 100644
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
@@ -340,18 +340,29 @@ cleanup:
 	return NULL;
 }
 
-struct drm_i915_gem_object *
-i915_gem_object_create_stolen(struct drm_device *dev, u32 size)
+static bool mark_free(struct drm_i915_gem_object *obj, struct list_head *unwind)
+{
+	if (obj->stolen == NULL)
+		return false;
+
+	if (obj->madv != I915_MADV_DONTNEED)
+		return false;
+
+	if (i915_gem_obj_is_pinned(obj))
+		return false;
+
+	list_add(&obj->obj_exec_link, unwind);
+	return drm_mm_scan_add_block(obj->stolen);
+}
+
+static struct drm_mm_node *
+stolen_alloc(struct drm_i915_private *dev_priv, u32 size)
 {
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct drm_i915_gem_object *obj;
 	struct drm_mm_node *stolen;
+	struct drm_i915_gem_object *obj;
+	struct list_head unwind, evict;
 	int ret;
 
-	if (!drm_mm_initialized(&dev_priv->mm.stolen))
-		return NULL;
-
-	DRM_DEBUG_KMS("creating stolen object: size=%x\n", size);
 	if (size == 0)
 		return NULL;
 
@@ -361,11 +372,99 @@ i915_gem_object_create_stolen(struct drm_device *dev, u32 size)
 
 	ret = drm_mm_insert_node(&dev_priv->mm.stolen, stolen, size,
 				 4096, DRM_MM_SEARCH_DEFAULT);
-	if (ret) {
-		kfree(stolen);
-		return NULL;
+	if (ret == 0)
+		return stolen;
+
+	/* No more stolen memory available, or too fragmented.
+	 * Try evicting purgeable objects and search again.
+	 */
+
+	drm_mm_init_scan(&dev_priv->mm.stolen, size, 4096, 0);
+	INIT_LIST_HEAD(&unwind);
+
+	list_for_each_entry(obj, &dev_priv->mm.unbound_list, global_list)
+		if (mark_free(obj, &unwind))
+			goto found;
+
+	list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list)
+		if (mark_free(obj, &unwind))
+			goto found;
+
+found:
+	INIT_LIST_HEAD(&evict);
+	while (!list_empty(&unwind)) {
+		obj = list_first_entry(&unwind,
+				       struct drm_i915_gem_object,
+				       obj_exec_link);
+		list_del_init(&obj->obj_exec_link);
+
+		if (drm_mm_scan_remove_block(obj->stolen)) {
+			list_add(&obj->obj_exec_link, &evict);
+			drm_gem_object_reference(&obj->base);
+		}
 	}
 
+	ret = 0;
+	while (!list_empty(&evict)) {
+		obj = list_first_entry(&evict,
+				       struct drm_i915_gem_object,
+				       obj_exec_link);
+		list_del_init(&obj->obj_exec_link);
+
+		if (ret == 0) {
+			struct i915_vma *vma, *vma_next;
+
+			list_for_each_entry_safe(vma, vma_next,
+						 &obj->vma_list,
+						 vma_link)
+				if (i915_vma_unbind(vma))
+					break;
+
+			/* Stolen pins its pages to prevent the
+			 * normal shrinker from processing stolen
+			 * objects.
+			 */
+			i915_gem_object_unpin_pages(obj);
+
+			ret = i915_gem_object_put_pages(obj);
+			if (ret == 0) {
+				i915_gem_object_release_stolen(obj);
+				obj->madv = __I915_MADV_PURGED;
+			} else
+				i915_gem_object_pin_pages(obj);
+		}
+
+		drm_gem_object_unreference(&obj->base);
+	}
+
+	if (ret == 0)
+		ret = drm_mm_insert_node(&dev_priv->mm.stolen, stolen, size,
+					 4096, DRM_MM_SEARCH_DEFAULT);
+	if (ret == 0)
+		return stolen;
+
+	kfree(stolen);
+	return NULL;
+}
+
+struct drm_i915_gem_object *
+i915_gem_object_create_stolen(struct drm_device *dev, u32 size)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct drm_i915_gem_object *obj;
+	struct drm_mm_node *stolen;
+
+	lockdep_assert_held(&dev->struct_mutex);
+
+	if (!drm_mm_initialized(&dev_priv->mm.stolen))
+		return NULL;
+
+	DRM_DEBUG_KMS("creating stolen object: size=%x\n", size);
+
+	stolen = stolen_alloc(dev_priv, size);
+	if (stolen == NULL)
+		return NULL;
+
 	obj = _i915_gem_object_create_stolen(dev, stolen);
 	if (obj)
 		return obj;
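
[Editor's note] The eviction pass in stolen_alloc() above follows the same
three-phase drm_mm scan idiom that the driver already uses for GTT eviction
(i915_gem_evict_something). For readers unfamiliar with that API, below is a
rough sketch of the idiom, reduced to a single candidate list. It assumes the
drm_mm scan interface of this kernel vintage (drm_mm_init_scan,
drm_mm_scan_add_block, drm_mm_scan_remove_block); the function name
evict_for_hole() and the evict_one() teardown helper are placeholders for
illustration only, not code from this patch.

/* Sketch only: generic drm_mm scan/evict pattern, not part of the patch. */
static int evict_for_hole(struct drm_mm *mm, struct drm_mm_node *node,
			  u32 size, struct list_head *candidates)
{
	struct drm_i915_gem_object *obj, *next;
	LIST_HEAD(scanned);
	LIST_HEAD(evict);

	/* Phase 1: describe the hole we need, then feed candidate nodes
	 * to the scanner until it reports that a suitable hole could be
	 * assembled from them.
	 */
	drm_mm_init_scan(mm, size, 4096, 0);
	list_for_each_entry(obj, candidates, global_list) {
		list_add(&obj->obj_exec_link, &scanned);
		if (drm_mm_scan_add_block(obj->stolen))
			break;
	}

	/* Phase 2: every scanned block must be taken back out of the
	 * scan in reverse order of addition (list_add() prepends, so
	 * head-to-tail is already reverse order); only the blocks the
	 * scanner flags actually need to be evicted.
	 */
	list_for_each_entry_safe(obj, next, &scanned, obj_exec_link) {
		if (drm_mm_scan_remove_block(obj->stolen))
			list_move(&obj->obj_exec_link, &evict);
		else
			list_del_init(&obj->obj_exec_link);
	}

	/* Phase 3: tear down the flagged objects (unbind, put pages,
	 * release their drm_mm_node), then retry the original insertion
	 * into the now-defragmented range.
	 */
	list_for_each_entry_safe(obj, next, &evict, obj_exec_link) {
		list_del_init(&obj->obj_exec_link);
		evict_one(obj);	/* placeholder for the real teardown */
	}

	return drm_mm_insert_node(mm, node, size, 4096,
				  DRM_MM_SEARCH_DEFAULT);
}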