From patchwork Thu Mar 27 15:28:28 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: arun.siluvery@linux.intel.com
X-Patchwork-Id: 3898341
From: arun.siluvery@linux.intel.com
To: intel-gfx@lists.freedesktop.org
Date: Thu, 27 Mar 2014 15:28:28 +0000
Message-Id: <1395934109-28522-3-git-send-email-arun.siluvery@linux.intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1395934109-28522-1-git-send-email-arun.siluvery@linux.intel.com>
References: <1395934109-28522-1-git-send-email-arun.siluvery@linux.intel.com>
Subject: [Intel-gfx] [RFC 
2/3] drm/i915: Handle gem object resize using scratch page for lazy allocation
List-Id: Intel graphics driver community testing & development
Sender: "Intel-gfx" 

From: "Siluvery, Arun" 

GEM object size is fixed at creation and is tightly coupled throughout
i915. To make objects resizeable, a lazy allocation approach is used: only
part of the object's total size is given a backing store, and the
scatter/gather entries for the remaining pages all point to a single
scratch page. A stop marker denotes the end of the real pages. For the
mipmaps use case there will only ever be one resize request, so a single
value is enough to track the range. The dummy entries are replaced with
real pages when the object is resized.
Change-Id: I645a0f9817f43bd127d038d9c17cba8466b7ba6e
Signed-off-by: Siluvery, Arun 
---
 drivers/gpu/drm/i915/i915_gem.c | 54 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 53 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 71d7526..d045eee 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -313,6 +313,7 @@ i915_gem_create_ioctl(struct drm_device *dev, void *data,
 {
 	int ret;
 	int total_page_count;
+	int base_page_count, scratch_page_count;
 	struct drm_i915_gem_object *obj;
 	struct drm_i915_gem_create *args = data;
 
@@ -340,7 +341,35 @@ i915_gem_create_ioctl(struct drm_device *dev, void *data,
 	obj->gem_resize.stop = total_page_count;
 	obj->gem_resize.scratch_page = NULL;
 
-	return 0;
+	ret = i915_mutex_lock_interruptible(dev);
+	if (ret)
+		return ret;
+
+	base_page_count = args->base_size / PAGE_SIZE;
+	if (base_page_count > total_page_count) {
+		DRM_DEBUG_DRIVER("invalid object size, base_size(%d) > total_size(%d)\n",
+				 base_page_count, total_page_count);
+		goto unlock;
+	}
+	obj->gem_resize.base_size = args->base_size;
+	scratch_page_count = total_page_count - base_page_count;
+	/* allocate backing store only for base size */
+	obj->gem_resize.stop = base_page_count;
+
+	obj->gem_resize.scratch_page = alloc_page(GFP_KERNEL | GFP_DMA32 | __GFP_ZERO);
+	if (obj->gem_resize.scratch_page == NULL) {
+		DRM_DEBUG_DRIVER("No memory to allocate scratch page\n");
+		ret = -ENOMEM;
+		goto unlock;
+	}
+	DRM_DEBUG_DRIVER("scratch page created 0x%p, base(%d) + scratch(%d) = total(%d)\n",
+			 obj->gem_resize.scratch_page, base_page_count,
+			 scratch_page_count, total_page_count);
+
+unlock:
+	mutex_unlock(&dev->struct_mutex);
+
+	return ret;
 }
 
 static inline int
@@ -1911,6 +1940,11 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 	struct page *page;
 	unsigned long last_pfn = 0;	/* suppress gcc warning */
 	gfp_t gfp;
+	int j;
+	bool allow_resize = false;
+	struct page *scratch_page = NULL;
+	int scratch_page_count = 0;
+	uint32_t stop;
 
 	/* Assert that the object is not currently in any GPU domain. As it
 	 * wasn't in the GTT, there shouldn't be any way it could have been in
@@ -1929,6 +1963,15 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 		return -ENOMEM;
 	}
 
+	stop = obj->gem_resize.stop;
+	if (stop && (stop < page_count) && obj->gem_resize.scratch_page) {
+		DRM_DEBUG_DRIVER("allow this object to resize later\n");
+		allow_resize = true;
+		scratch_page_count = page_count - obj->gem_resize.stop;
+		page_count = obj->gem_resize.stop;
+		scratch_page = obj->gem_resize.scratch_page;
+	}
+
 	/* Get the list of pages out of our struct file.  They'll be pinned
 	 * at this point until we release them.
 	 *
@@ -1983,6 +2026,15 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 		/* Check that the i965g/gm workaround works. */
 		WARN_ON((gfp & __GFP_DMA32) && (last_pfn >= 0x00100000UL));
 	}
+
+	if (allow_resize == true) {
+		for (j = 0; j < scratch_page_count; ++j) {
+			st->nents++;
+			sg = sg_next(sg);
+			sg_set_page(sg, scratch_page, PAGE_SIZE, 0);
+		}
+	}
+
 #ifdef CONFIG_SWIOTLB
 	if (!swiotlb_nr_tbl())
 #endif
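As a rough illustration of the scheme described in the commit message — real
pages up to a stop marker, one shared scratch page behind every dummy entry
after it — here is a standalone userspace C sketch of the create and resize
steps. It is only a model: the names fake_obj, fake_create and fake_resize
are invented for this sketch, and plain malloc'd buffers stand in for the
kernel's struct page entries, scratch page and gem_resize.stop marker.

```c
#include <assert.h>
#include <stdlib.h>

#define TOTAL_PAGES 8
#define FAKE_PAGE_SIZE 4096

struct fake_obj {
	void *pages[TOTAL_PAGES]; /* one "sg entry" per page of the object */
	void *scratch;            /* single scratch page shared by dummy entries */
	int stop;                 /* index of the first scratch-backed entry */
};

/* Create: back only the first base_pages entries with real memory;
 * every remaining entry points at the one zeroed scratch page. */
static int fake_create(struct fake_obj *obj, int base_pages)
{
	int i;

	if (base_pages > TOTAL_PAGES)
		return -1;

	obj->scratch = calloc(1, FAKE_PAGE_SIZE);
	if (!obj->scratch)
		return -1;
	obj->stop = base_pages;

	for (i = 0; i < TOTAL_PAGES; i++)
		obj->pages[i] = (i < base_pages) ? malloc(FAKE_PAGE_SIZE)
						 : obj->scratch;
	return 0;
}

/* Resize: replace scratch-backed entries with real pages and advance
 * the stop marker, mirroring how the dummy sg entries would later be
 * updated with real backing store. */
static int fake_resize(struct fake_obj *obj, int new_pages)
{
	int i;

	if (new_pages < obj->stop || new_pages > TOTAL_PAGES)
		return -1;

	for (i = obj->stop; i < new_pages; i++)
		obj->pages[i] = malloc(FAKE_PAGE_SIZE);
	obj->stop = new_pages;
	return 0;
}
```

Since the mipmap use case only ever grows the object once, a single stop
index is enough state: everything below it is real, everything at or above
it aliases the scratch page.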