From patchwork Wed Aug 7 17:41:24 2013
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 2840438
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Subject: [PATCH 6/9] drm/gem: add shmem get/put page helpers
Date: Wed, 7 Aug 2013 13:41:24 -0400
Message-Id: <1375897287-8787-7-git-send-email-robdclark@gmail.com>
In-Reply-To: <1375897287-8787-1-git-send-email-robdclark@gmail.com>
References: <1375897287-8787-1-git-send-email-robdclark@gmail.com>

Basically just extracting some code duplicated in gma500, omapdrm,
udl, and upcoming msm driver.
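As a rough sketch of the intended usage (the foo_* names and the
->pages field are hypothetical, for illustration only; each driver
keeps its own object struct), a driver's pin/unpin paths reduce to
something like:

	static int foo_pin_pages(struct foo_gem_object *foo_obj)
	{
		/* populate the shmem-backed page array for the object */
		struct page **pages;

		pages = drm_gem_get_pages(&foo_obj->base, GFP_KERNEL);
		if (IS_ERR(pages))
			return PTR_ERR(pages);

		foo_obj->pages = pages;
		return 0;
	}

	static void foo_unpin_pages(struct foo_gem_object *foo_obj)
	{
		/* drop our page references; mark them dirty and accessed
		 * so writeback and swap do the right thing */
		drm_gem_put_pages(&foo_obj->base, foo_obj->pages, true, true);
		foo_obj->pages = NULL;
	}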
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/drm_gem.c | 103 ++++++++++++++++++++++++++++++++++++++++++++++
 include/drm/drmP.h        |   4 ++
 2 files changed, 107 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 84d59f7..4355e3e 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -344,6 +344,109 @@ int drm_gem_create_mmap_offset(struct drm_gem_object *obj)
 }
 EXPORT_SYMBOL(drm_gem_create_mmap_offset);
 
+/**
+ * drm_gem_get_pages - helper to allocate backing pages for a GEM object
+ * from shmem
+ * @obj: obj in question
+ * @gfpmask: gfp mask of requested pages
+ */
+struct page **drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask)
+{
+	struct inode *inode;
+	struct address_space *mapping;
+	struct page *p, **pages;
+	int i, npages;
+
+	/* This is the shared memory object that backs the GEM resource */
+	inode = file_inode(obj->filp);
+	mapping = inode->i_mapping;
+
+	/* We already BUG_ON() for non-page-aligned sizes in
+	 * drm_gem_object_init(), so we should never hit this unless
+	 * driver author is doing something really wrong:
+	 */
+	WARN_ON((obj->size & (PAGE_SIZE - 1)) != 0);
+
+	npages = obj->size >> PAGE_SHIFT;
+
+	pages = drm_malloc_ab(npages, sizeof(struct page *));
+	if (pages == NULL)
+		return ERR_PTR(-ENOMEM);
+
+	gfpmask |= mapping_gfp_mask(mapping);
+
+	for (i = 0; i < npages; i++) {
+		p = shmem_read_mapping_page_gfp(mapping, i, gfpmask);
+		if (IS_ERR(p))
+			goto fail;
+		pages[i] = p;
+
+		/* There is a hypothetical issue w/ drivers that require
+		 * buffer memory in the low 4GB.. if the pages are un-
+		 * pinned, and swapped out, they can end up swapped back
+		 * in above 4GB.  If pages are already in memory, then
+		 * shmem_read_mapping_page_gfp will ignore the gfpmask,
+		 * even if the already in-memory page disobeys the mask.
+		 *
+		 * It is only a theoretical issue today, because none of
+		 * the devices with this limitation can be populated with
+		 * enough memory to trigger the issue.  But this BUG_ON()
+		 * is here as a reminder in case the problem with
+		 * shmem_read_mapping_page_gfp() isn't solved by the time
+		 * it does become a real issue.
+		 *
+		 * See this thread: http://lkml.org/lkml/2011/7/11/238
+		 */
+		BUG_ON((gfpmask & __GFP_DMA32) &&
+				(page_to_pfn(p) >= 0x00100000UL));
+	}
+
+	return pages;
+
+fail:
+	while (i--)
+		page_cache_release(pages[i]);
+
+	drm_free_large(pages);
+	return ERR_CAST(p);
+}
+EXPORT_SYMBOL(drm_gem_get_pages);
+
+/**
+ * drm_gem_put_pages - helper to free backing pages for a GEM object
+ * @obj: obj in question
+ * @pages: pages to free
+ * @dirty: if true, pages will be marked as dirty
+ * @accessed: if true, the pages will be marked as accessed
+ */
+void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
+		bool dirty, bool accessed)
+{
+	int i, npages;
+
+	/* We already BUG_ON() for non-page-aligned sizes in
+	 * drm_gem_object_init(), so we should never hit this unless
+	 * driver author is doing something really wrong:
+	 */
+	WARN_ON((obj->size & (PAGE_SIZE - 1)) != 0);
+
+	npages = obj->size >> PAGE_SHIFT;
+
+	for (i = 0; i < npages; i++) {
+		if (dirty)
+			set_page_dirty(pages[i]);
+
+		if (accessed)
+			mark_page_accessed(pages[i]);
+
+		/* Undo the reference we took when populating the table */
+		page_cache_release(pages[i]);
+	}
+
+	drm_free_large(pages);
+}
+EXPORT_SYMBOL(drm_gem_put_pages);
+
 /** Returns a reference to the object named by the handle. */
 struct drm_gem_object *
 drm_gem_object_lookup(struct drm_device *dev, struct drm_file *filp,
diff --git a/include/drm/drmP.h b/include/drm/drmP.h
index d00eb89..0045195 100644
--- a/include/drm/drmP.h
+++ b/include/drm/drmP.h
@@ -1670,6 +1670,10 @@ void drm_gem_free_mmap_offset(struct drm_gem_object *obj);
 int drm_gem_create_mmap_offset(struct drm_gem_object *obj);
 int drm_gem_create_mmap_offset_size(struct drm_gem_object *obj, size_t size);
 
+struct page **drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask);
+void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
+		bool dirty, bool accessed);
+
 struct drm_gem_object *drm_gem_object_lookup(struct drm_device *dev,
 					     struct drm_file *filp,
 					     u32 handle);
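
The gfpmask argument is what a driver with a low-4GB limitation would
use; a hedged sketch of such a caller (the caveat about already-resident
pages is exactly the one described in the comment in the helper):

	/* ask for pages below 4GB; the helper BUG()s if shmem hands
	 * back an already-resident page above that boundary */
	pages = drm_gem_get_pages(obj, GFP_KERNEL | __GFP_DMA32);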