From patchwork Tue Mar 1 16:33:56 2016
X-Patchwork-Submitter: Dave Gordon
X-Patchwork-Id: 8466741
From: Dave Gordon
To: intel-gfx@lists.freedesktop.org
Date: Tue, 1 Mar 2016 16:33:56 +0000
Message-Id: <1456850039-25856-5-git-send-email-david.s.gordon@intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1456850039-25856-1-git-send-email-david.s.gordon@intel.com>
References: <1456850039-25856-1-git-send-email-david.s.gordon@intel.com>
Organization: Intel Corporation (UK) Ltd. - Co. Reg. #1134945 - Pipers Way, Swindon SN3 1RJ
Subject: [Intel-gfx] [PATCH v7 4/7] drm/i915: introduce and use i915_gem_object_vmap_range()

From: Alex Dai

There are several places inside the driver where a GEM object is mapped to
kernel virtual space. The mapping may cover either the whole object or only
a subset of it. This patch introduces a new function,
i915_gem_object_vmap_range(), to implement the common functionality. The
code itself is extracted and adapted from that in vmap_batch(), but it also
replaces vmap_obj() and the open-coded version in i915_gem_dmabuf_vmap().

v2: use obj->pages->nents for iteration within i915_gem_object_vmap;
    break when it finishes all desired pages. The caller must pass the
    actual page count required. [Tvrtko Ursulin]

v4: renamed to i915_gem_object_vmap_range() to make its function clearer.
    [Dave Gordon]

v5: use Chris Wilson's new drm_malloc_gfp() rather than kmalloc() or
    drm_malloc_ab(). [Dave Gordon]

v6: changed range checking to not use pages->nents.
    [Tvrtko Ursulin]
    Use sg_nents_for_len() for range check instead. [Dave Gordon]
    Pass range parameters in bytes rather than pages (both callers were
    converting from bytes to pages anyway, so this reduces the number of
    places where the conversion is done).

v7: changed range parameters back to pages, and simplified parameter
    validation. [Tvrtko Ursulin]
    As a convenience for callers, allow npages==0 as a shorthand for
    "up to the end of the object".

With this change, we have only one vmap() in the whole driver :)

Signed-off-by: Alex Dai
Signed-off-by: Dave Gordon
Cc: Tvrtko Ursulin
Cc: Chris Wilson
Reviewed-by: Tvrtko Ursulin
---
 drivers/gpu/drm/i915/i915_cmd_parser.c  | 34 +++----------------
 drivers/gpu/drm/i915/i915_drv.h         |  4 +++
 drivers/gpu/drm/i915/i915_gem.c         | 59 +++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_gem_dmabuf.c  | 15 ++-------
 drivers/gpu/drm/i915/intel_ringbuffer.c | 23 +------------
 5 files changed, 71 insertions(+), 64 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c
index 814d894..4d48617 100644
--- a/drivers/gpu/drm/i915/i915_cmd_parser.c
+++ b/drivers/gpu/drm/i915/i915_cmd_parser.c
@@ -863,37 +863,13 @@ void i915_cmd_parser_fini_ring(struct intel_engine_cs *ring)
 static u32 *vmap_batch(struct drm_i915_gem_object *obj,
 		       unsigned start, unsigned len)
 {
-	int i;
-	void *addr = NULL;
-	struct sg_page_iter sg_iter;
-	int first_page = start >> PAGE_SHIFT;
-	int last_page = (len + start + 4095) >> PAGE_SHIFT;
-	int npages = last_page - first_page;
-	struct page **pages;
-
-	pages = drm_malloc_ab(npages, sizeof(*pages));
-	if (pages == NULL) {
-		DRM_DEBUG_DRIVER("Failed to get space for pages\n");
-		goto finish;
-	}
-
-	i = 0;
-	for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, first_page) {
-		pages[i++] = sg_page_iter_page(&sg_iter);
-		if (i == npages)
-			break;
-	}
+	unsigned long first, npages;
 
-	addr = vmap(pages, i, 0, PAGE_KERNEL);
-	if (addr == NULL) {
-		DRM_DEBUG_DRIVER("Failed to vmap pages\n");
-		goto finish;
-	}
+	/* Convert [start, len) to pages */
+	first = start >> PAGE_SHIFT;
+	npages = DIV_ROUND_UP(start + len, PAGE_SIZE) - first;
 
-finish:
-	if (pages)
-		drm_free_large(pages);
-	return (u32*)addr;
+	return i915_gem_object_vmap_range(obj, first, npages);
 }
 
 /* Returns a vmap'd pointer to dest_obj, which the caller must unmap */
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index a4dcb74..b3ae191 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2983,6 +2983,10 @@ static inline void i915_gem_object_unpin_pages(struct drm_i915_gem_object *obj)
 	obj->pages_pin_count--;
 }
 
+void *__must_check i915_gem_object_vmap_range(struct drm_i915_gem_object *obj,
+					      unsigned long first,
+					      unsigned long npages);
+
 int __must_check i915_mutex_lock_interruptible(struct drm_device *dev);
 int i915_gem_object_sync(struct drm_i915_gem_object *obj,
 			 struct intel_engine_cs *to,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 3d31d3a..d7c9ccd 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2400,6 +2400,65 @@ static void i915_gem_object_free_mmap_offset(struct drm_i915_gem_object *obj)
 	return 0;
 }
 
+/**
+ * i915_gem_object_vmap_range - map some or all of a GEM object into kernel space
+ * @obj: the GEM object to be mapped
+ * @first: offset in pages of the start of the range to be mapped
+ * @npages: length in pages of the range to be mapped. For convenience, a
+ *	length of zero is taken to mean "the remainder of the object"
+ *
+ * Map a given range of a GEM object into kernel virtual space. The caller must
+ * make sure the associated pages are gathered and pinned before calling this
+ * function, and is responsible for unmapping the returned address when it is no
+ * longer required.
+ *
+ * Returns the address at which the object has been mapped, or NULL on failure.
+ */
+void *i915_gem_object_vmap_range(struct drm_i915_gem_object *obj,
+				 unsigned long first,
+				 unsigned long npages)
+{
+	unsigned long max_pages = obj->base.size >> PAGE_SHIFT;
+	struct scatterlist *sg = obj->pages->sgl;
+	struct sg_page_iter sg_iter;
+	struct page **pages;
+	unsigned long i = 0;
+	void *addr = NULL;
+
+	/* Minimal range check */
+	if (first + npages > max_pages) {
+		DRM_DEBUG_DRIVER("Invalid page range\n");
+		return NULL;
+	}
+
+	/* npages==0 is shorthand for "the rest of the object" */
+	if (npages == 0)
+		npages = max_pages - first;
+
+	pages = drm_malloc_gfp(npages, sizeof(*pages), GFP_TEMPORARY);
+	if (pages == NULL) {
+		DRM_DEBUG_DRIVER("Failed to get space for pages\n");
+		return NULL;
+	}
+
+	for_each_sg_page(sg, &sg_iter, max_pages, first) {
+		pages[i] = sg_page_iter_page(&sg_iter);
+		if (++i == npages) {
+			addr = vmap(pages, npages, 0, PAGE_KERNEL);
+			break;
+		}
+	}
+
+	/* We should have got here via the 'break' above */
+	WARN_ON(i != npages);
+	if (addr == NULL)
+		DRM_DEBUG_DRIVER("Failed to vmap pages\n");
+
+	drm_free_large(pages);
+
+	return addr;
+}
+
 void i915_vma_move_to_active(struct i915_vma *vma,
 			     struct drm_i915_gem_request *req)
 {
diff --git a/drivers/gpu/drm/i915/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
index 616f078..3a5d01a 100644
--- a/drivers/gpu/drm/i915/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
@@ -108,9 +108,7 @@ static void *i915_gem_dmabuf_vmap(struct dma_buf *dma_buf)
 {
 	struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
 	struct drm_device *dev = obj->base.dev;
-	struct sg_page_iter sg_iter;
-	struct page **pages;
-	int ret, i;
+	int ret;
 
 	ret = i915_mutex_lock_interruptible(dev);
 	if (ret)
@@ -129,16 +127,7 @@ static void *i915_gem_dmabuf_vmap(struct dma_buf *dma_buf)
 
 	ret = -ENOMEM;
 
-	pages = drm_malloc_ab(obj->base.size >> PAGE_SHIFT, sizeof(*pages));
-	if (pages == NULL)
-		goto err_unpin;
-
-	i = 0;
-	for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, 0)
-		pages[i++] = sg_page_iter_page(&sg_iter);
-
-	obj->dma_buf_vmapping = vmap(pages, i, 0, PAGE_KERNEL);
-	drm_free_large(pages);
+	obj->dma_buf_vmapping = i915_gem_object_vmap_range(obj, 0, 0);
 
 	if (!obj->dma_buf_vmapping)
 		goto err_unpin;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 8f52556..58a18e1 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -2064,27 +2064,6 @@ void intel_unpin_ringbuffer_obj(struct intel_ringbuffer *ringbuf)
 	i915_gem_object_ggtt_unpin(ringbuf->obj);
 }
 
-static u32 *vmap_obj(struct drm_i915_gem_object *obj)
-{
-	struct sg_page_iter sg_iter;
-	struct page **pages;
-	void *addr;
-	int i;
-
-	pages = drm_malloc_ab(obj->base.size >> PAGE_SHIFT, sizeof(*pages));
-	if (pages == NULL)
-		return NULL;
-
-	i = 0;
-	for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, 0)
-		pages[i++] = sg_page_iter_page(&sg_iter);
-
-	addr = vmap(pages, i, 0, PAGE_KERNEL);
-	drm_free_large(pages);
-
-	return addr;
-}
-
 int intel_pin_and_map_ringbuffer_obj(struct drm_device *dev,
 				     struct intel_ringbuffer *ringbuf)
 {
@@ -2101,7 +2080,7 @@ int intel_pin_and_map_ringbuffer_obj(struct drm_device *dev,
 	if (ret)
 		goto unpin;
 
-	ringbuf->virtual_start = vmap_obj(obj);
+	ringbuf->virtual_start = i915_gem_object_vmap_range(obj, 0, 0);
 	if (ringbuf->virtual_start == NULL) {
 		ret = -ENOMEM;
 		goto unpin;
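
For reference, here is a minimal usage sketch of the new helper (illustration
only, not part of the diff above; the wrapper function and the choice of two
pages are invented for the example). It assumes the caller already holds
struct_mutex, as the dma-buf path does, and follows the in-tree pattern of
gathering and pinning the backing pages first; the mapping is later released
with vunmap() before the pages are unpinned:

	/* Hypothetical example: map the first two pages of @obj */
	static void *example_map_first_two_pages(struct drm_i915_gem_object *obj)
	{
		void *addr;
		int ret;

		/* The helper requires the backing pages to be gathered and pinned */
		ret = i915_gem_object_get_pages(obj);
		if (ret)
			return NULL;
		i915_gem_object_pin_pages(obj);

		/* npages == 0 would instead mean "map up to the end of the object" */
		addr = i915_gem_object_vmap_range(obj, 0, 2);
		if (addr == NULL) {
			i915_gem_object_unpin_pages(obj);
			return NULL;
		}

		/* ... use the mapping; release with vunmap(addr), then unpin ... */
		return addr;
	}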