From patchwork Mon Jun 30 21:46:13 2014
X-Patchwork-Submitter: Jerome Glisse
X-Patchwork-Id: 4454701
From: j.glisse@gmail.com
To: dri-devel@lists.freedesktop.org
Subject: [PATCH] exynos: Put a stop to the userptr heresy.
Date: Mon, 30 Jun 2014 17:46:13 -0400
Message-Id: <1404164773-14759-1-git-send-email-j.glisse@gmail.com>
X-Mailer: git-send-email 1.8.3.1
Cc: kyungmin.park@samsung.com, Jerome Glisse

From: Jerome Glisse

get_user_pages gives no guarantee that the pages it returns are still the
ones backing the vma by the time it returns. Thus any ioctl that relies on
this behavior is broken and depends on pure luck. To avoid giving userspace
any false hope, stop such usage by simply flat out returning -EFAULT.
Better to have reliable behavior than to depend on pure luck and the
currently observed behavior of the mm code.

Note this was not even compile tested, but I think I did update all the
affected places.
Signed-off-by: Jérôme Glisse --- drivers/gpu/drm/exynos/exynos_drm_drv.h | 1 - drivers/gpu/drm/exynos/exynos_drm_g2d.c | 277 +------------------------------- drivers/gpu/drm/exynos/exynos_drm_gem.c | 60 ------- drivers/gpu/drm/exynos/exynos_drm_gem.h | 20 --- 4 files changed, 3 insertions(+), 355 deletions(-) diff --git a/drivers/gpu/drm/exynos/exynos_drm_drv.h b/drivers/gpu/drm/exynos/exynos_drm_drv.h index 36535f3..7b55e89 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_drv.h +++ b/drivers/gpu/drm/exynos/exynos_drm_drv.h @@ -233,7 +233,6 @@ struct exynos_drm_g2d_private { struct device *dev; struct list_head inuse_cmdlist; struct list_head event_list; - struct list_head userptr_list; }; struct exynos_drm_ipp_private { diff --git a/drivers/gpu/drm/exynos/exynos_drm_g2d.c b/drivers/gpu/drm/exynos/exynos_drm_g2d.c index 8001587..d0be6dc 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_g2d.c +++ b/drivers/gpu/drm/exynos/exynos_drm_g2d.c @@ -118,9 +118,6 @@ #define G2D_CMDLIST_POOL_SIZE (G2D_CMDLIST_SIZE * G2D_CMDLIST_NUM) #define G2D_CMDLIST_DATA_NUM (G2D_CMDLIST_SIZE / sizeof(u32) - 2) -/* maximum buffer pool size of userptr is 64MB as default */ -#define MAX_POOL (64 * 1024 * 1024) - enum { BUF_TYPE_GEM = 1, BUF_TYPE_USERPTR, @@ -185,19 +182,6 @@ struct drm_exynos_pending_g2d_event { struct drm_exynos_g2d_event event; }; -struct g2d_cmdlist_userptr { - struct list_head list; - dma_addr_t dma_addr; - unsigned long userptr; - unsigned long size; - struct page **pages; - unsigned int npages; - struct sg_table *sgt; - struct vm_area_struct *vma; - atomic_t refcount; - bool in_pool; - bool out_of_list; -}; struct g2d_cmdlist_node { struct list_head list; struct g2d_cmdlist *cmdlist; @@ -242,7 +226,6 @@ struct g2d_data { struct kmem_cache *runqueue_slab; unsigned long current_pool; - unsigned long max_pool; }; static int g2d_init_cmdlist(struct g2d_data *g2d) @@ -354,232 +337,6 @@ add_to_list: list_add_tail(&node->event->base.link, &g2d_priv->event_list); } -static void 
g2d_userptr_put_dma_addr(struct drm_device *drm_dev, - unsigned long obj, - bool force) -{ - struct g2d_cmdlist_userptr *g2d_userptr = - (struct g2d_cmdlist_userptr *)obj; - - if (!obj) - return; - - if (force) - goto out; - - atomic_dec(&g2d_userptr->refcount); - - if (atomic_read(&g2d_userptr->refcount) > 0) - return; - - if (g2d_userptr->in_pool) - return; - -out: - exynos_gem_unmap_sgt_from_dma(drm_dev, g2d_userptr->sgt, - DMA_BIDIRECTIONAL); - - exynos_gem_put_pages_to_userptr(g2d_userptr->pages, - g2d_userptr->npages, - g2d_userptr->vma); - - exynos_gem_put_vma(g2d_userptr->vma); - - if (!g2d_userptr->out_of_list) - list_del_init(&g2d_userptr->list); - - sg_free_table(g2d_userptr->sgt); - kfree(g2d_userptr->sgt); - - drm_free_large(g2d_userptr->pages); - kfree(g2d_userptr); -} - -static dma_addr_t *g2d_userptr_get_dma_addr(struct drm_device *drm_dev, - unsigned long userptr, - unsigned long size, - struct drm_file *filp, - unsigned long *obj) -{ - struct drm_exynos_file_private *file_priv = filp->driver_priv; - struct exynos_drm_g2d_private *g2d_priv = file_priv->g2d_priv; - struct g2d_cmdlist_userptr *g2d_userptr; - struct g2d_data *g2d; - struct page **pages; - struct sg_table *sgt; - struct vm_area_struct *vma; - unsigned long start, end; - unsigned int npages, offset; - int ret; - - if (!size) { - DRM_ERROR("invalid userptr size.\n"); - return ERR_PTR(-EINVAL); - } - - g2d = dev_get_drvdata(g2d_priv->dev); - - /* check if userptr already exists in userptr_list. */ - list_for_each_entry(g2d_userptr, &g2d_priv->userptr_list, list) { - if (g2d_userptr->userptr == userptr) { - /* - * also check size because there could be same address - * and different size. 
- */ - if (g2d_userptr->size == size) { - atomic_inc(&g2d_userptr->refcount); - *obj = (unsigned long)g2d_userptr; - - return &g2d_userptr->dma_addr; - } - - /* - * at this moment, maybe g2d dma is accessing this - * g2d_userptr memory region so just remove this - * g2d_userptr object from userptr_list not to be - * referred again and also except it the userptr - * pool to be released after the dma access completion. - */ - g2d_userptr->out_of_list = true; - g2d_userptr->in_pool = false; - list_del_init(&g2d_userptr->list); - - break; - } - } - - g2d_userptr = kzalloc(sizeof(*g2d_userptr), GFP_KERNEL); - if (!g2d_userptr) - return ERR_PTR(-ENOMEM); - - atomic_set(&g2d_userptr->refcount, 1); - - start = userptr & PAGE_MASK; - offset = userptr & ~PAGE_MASK; - end = PAGE_ALIGN(userptr + size); - npages = (end - start) >> PAGE_SHIFT; - g2d_userptr->npages = npages; - - pages = drm_calloc_large(npages, sizeof(struct page *)); - if (!pages) { - DRM_ERROR("failed to allocate pages.\n"); - ret = -ENOMEM; - goto err_free; - } - - down_read(&current->mm->mmap_sem); - vma = find_vma(current->mm, userptr); - if (!vma) { - up_read(&current->mm->mmap_sem); - DRM_ERROR("failed to get vm region.\n"); - ret = -EFAULT; - goto err_free_pages; - } - - if (vma->vm_end < userptr + size) { - up_read(&current->mm->mmap_sem); - DRM_ERROR("vma is too small.\n"); - ret = -EFAULT; - goto err_free_pages; - } - - g2d_userptr->vma = exynos_gem_get_vma(vma); - if (!g2d_userptr->vma) { - up_read(&current->mm->mmap_sem); - DRM_ERROR("failed to copy vma.\n"); - ret = -ENOMEM; - goto err_free_pages; - } - - g2d_userptr->size = size; - - ret = exynos_gem_get_pages_from_userptr(start & PAGE_MASK, - npages, pages, vma); - if (ret < 0) { - up_read(&current->mm->mmap_sem); - DRM_ERROR("failed to get user pages from userptr.\n"); - goto err_put_vma; - } - - up_read(&current->mm->mmap_sem); - g2d_userptr->pages = pages; - - sgt = kzalloc(sizeof(*sgt), GFP_KERNEL); - if (!sgt) { - ret = -ENOMEM; - goto err_free_userptr; - } - - ret = 
sg_alloc_table_from_pages(sgt, pages, npages, offset, - size, GFP_KERNEL); - if (ret < 0) { - DRM_ERROR("failed to get sgt from pages.\n"); - goto err_free_sgt; - } - - g2d_userptr->sgt = sgt; - - ret = exynos_gem_map_sgt_with_dma(drm_dev, g2d_userptr->sgt, - DMA_BIDIRECTIONAL); - if (ret < 0) { - DRM_ERROR("failed to map sgt with dma region.\n"); - goto err_sg_free_table; - } - - g2d_userptr->dma_addr = sgt->sgl[0].dma_address; - g2d_userptr->userptr = userptr; - - list_add_tail(&g2d_userptr->list, &g2d_priv->userptr_list); - - if (g2d->current_pool + (npages << PAGE_SHIFT) < g2d->max_pool) { - g2d->current_pool += npages << PAGE_SHIFT; - g2d_userptr->in_pool = true; - } - - *obj = (unsigned long)g2d_userptr; - - return &g2d_userptr->dma_addr; - -err_sg_free_table: - sg_free_table(sgt); - -err_free_sgt: - kfree(sgt); - -err_free_userptr: - exynos_gem_put_pages_to_userptr(g2d_userptr->pages, - g2d_userptr->npages, - g2d_userptr->vma); - -err_put_vma: - exynos_gem_put_vma(g2d_userptr->vma); - -err_free_pages: - drm_free_large(pages); - -err_free: - kfree(g2d_userptr); - - return ERR_PTR(ret); -} - -static void g2d_userptr_free_all(struct drm_device *drm_dev, - struct g2d_data *g2d, - struct drm_file *filp) -{ - struct drm_exynos_file_private *file_priv = filp->driver_priv; - struct exynos_drm_g2d_private *g2d_priv = file_priv->g2d_priv; - struct g2d_cmdlist_userptr *g2d_userptr, *n; - - list_for_each_entry_safe(g2d_userptr, n, &g2d_priv->userptr_list, list) - if (g2d_userptr->in_pool) - g2d_userptr_put_dma_addr(drm_dev, - (unsigned long)g2d_userptr, - true); - - g2d->current_pool = 0; -} - static enum g2d_reg_type g2d_get_reg_type(int reg_offset) { enum g2d_reg_type reg_type; @@ -734,29 +491,8 @@ static int g2d_map_cmdlist_gem(struct g2d_data *g2d, goto err; } } else { - struct drm_exynos_g2d_userptr g2d_userptr; - - if (copy_from_user(&g2d_userptr, (void __user *)handle, - sizeof(struct drm_exynos_g2d_userptr))) { - ret = -EFAULT; - goto err; - } - - if 
(!g2d_check_buf_desc_is_valid(buf_desc, reg_type, - g2d_userptr.size)) { - ret = -EFAULT; - goto err; - } - - addr = g2d_userptr_get_dma_addr(drm_dev, - g2d_userptr.userptr, - g2d_userptr.size, - file, - &handle); - if (IS_ERR(addr)) { - ret = -EFAULT; - goto err; - } + ret = -EFAULT; + goto err; } cmdlist->data[reg_pos + 1] = *addr; @@ -793,8 +529,7 @@ static void g2d_unmap_cmdlist_gem(struct g2d_data *g2d, exynos_drm_gem_put_dma_addr(subdrv->drm_dev, handle, filp); else - g2d_userptr_put_dma_addr(subdrv->drm_dev, handle, - false); + BUG(); buf_info->reg_types[i] = REG_TYPE_NONE; buf_info->handles[reg_type] = 0; @@ -1329,7 +1064,6 @@ static int g2d_open(struct drm_device *drm_dev, struct device *dev, INIT_LIST_HEAD(&g2d_priv->inuse_cmdlist); INIT_LIST_HEAD(&g2d_priv->event_list); - INIT_LIST_HEAD(&g2d_priv->userptr_list); return 0; } @@ -1363,9 +1097,6 @@ static void g2d_close(struct drm_device *drm_dev, struct device *dev, } mutex_unlock(&g2d->cmdlist_mutex); - /* release all g2d_userptr in pool. */ - g2d_userptr_free_all(drm_dev, g2d, file); - kfree(file_priv->g2d_priv); } @@ -1433,8 +1164,6 @@ static int g2d_probe(struct platform_device *pdev) goto err_put_clk; } - g2d->max_pool = MAX_POOL; - platform_set_drvdata(pdev, g2d); subdrv = &g2d->subdrv; diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c index 163a054..cb624d9 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_gem.c +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c @@ -496,66 +496,6 @@ void exynos_gem_put_vma(struct vm_area_struct *vma) kfree(vma); } -int exynos_gem_get_pages_from_userptr(unsigned long start, - unsigned int npages, - struct page **pages, - struct vm_area_struct *vma) -{ - int get_npages; - - /* the memory region mmaped with VM_PFNMAP. 
*/ - if (vma_is_io(vma)) { - unsigned int i; - - for (i = 0; i < npages; ++i, start += PAGE_SIZE) { - unsigned long pfn; - int ret = follow_pfn(vma, start, &pfn); - if (ret) - return ret; - - pages[i] = pfn_to_page(pfn); - } - - if (i != npages) { - DRM_ERROR("failed to get user_pages.\n"); - return -EINVAL; - } - - return 0; - } - - get_npages = get_user_pages(current, current->mm, start, - npages, 1, 1, pages, NULL); - get_npages = max(get_npages, 0); - if (get_npages != npages) { - DRM_ERROR("failed to get user_pages.\n"); - while (get_npages) - put_page(pages[--get_npages]); - return -EFAULT; - } - - return 0; -} - -void exynos_gem_put_pages_to_userptr(struct page **pages, - unsigned int npages, - struct vm_area_struct *vma) -{ - if (!vma_is_io(vma)) { - unsigned int i; - - for (i = 0; i < npages; i++) { - set_page_dirty_lock(pages[i]); - - /* - * undo the reference we took when populating - * the table. - */ - put_page(pages[i]); - } - } -} - int exynos_gem_map_sgt_with_dma(struct drm_device *drm_dev, struct sg_table *sgt, enum dma_data_direction dir) diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.h b/drivers/gpu/drm/exynos/exynos_drm_gem.h index 1592c0b..07c00a3 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_gem.h +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.h @@ -21,7 +21,6 @@ * exynos drm gem buffer structure. * * @kvaddr: kernel virtual address to allocated memory region. - * *userptr: user space address. * @dma_addr: bus address(accessed by dma) to allocated memory region. * - this address could be physical address without IOMMU and * device address with IOMMU. @@ -29,19 +28,15 @@ * @pages: Array of backing pages. * @sgt: sg table to transfer page data. * @size: size of allocated memory region. - * @pfnmap: indicate whether memory region from userptr is mmaped with - * VM_PFNMAP or not. 
*/ struct exynos_drm_gem_buf { void __iomem *kvaddr; - unsigned long userptr; dma_addr_t dma_addr; struct dma_attrs dma_attrs; unsigned int write; struct page **pages; struct sg_table *sgt; unsigned long size; - bool pfnmap; }; /* @@ -125,10 +120,6 @@ int exynos_drm_gem_mmap_ioctl(struct drm_device *dev, void *data, int exynos_drm_gem_mmap_buffer(struct file *filp, struct vm_area_struct *vma); -/* map user space allocated by malloc to pages. */ -int exynos_drm_gem_userptr_ioctl(struct drm_device *dev, void *data, - struct drm_file *file_priv); - /* get buffer information to memory region allocated by gem. */ int exynos_drm_gem_get_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); @@ -168,17 +159,6 @@ struct vm_area_struct *exynos_gem_get_vma(struct vm_area_struct *vma); /* release a userspace virtual memory area. */ void exynos_gem_put_vma(struct vm_area_struct *vma); -/* get pages from user space. */ -int exynos_gem_get_pages_from_userptr(unsigned long start, - unsigned int npages, - struct page **pages, - struct vm_area_struct *vma); - -/* drop the reference to pages. */ -void exynos_gem_put_pages_to_userptr(struct page **pages, - unsigned int npages, - struct vm_area_struct *vma); - /* map sgt with dma region. */ int exynos_gem_map_sgt_with_dma(struct drm_device *drm_dev, struct sg_table *sgt,