From patchwork Wed Apr 26 21:12:30 2017
X-Patchwork-Submitter: Laura Abbott
X-Patchwork-Id: 9702023
From: Laura Abbott
To: Daniel Vetter, Chris Wilson, Sean Paul
Cc: linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [RFC PATCHv2 3/3] drm/vgem: Enable dmabuf import interfaces
Date: Wed, 26 Apr 2017 14:12:30 -0700
Message-Id: <1493241150-21742-4-git-send-email-labbott@redhat.com>
In-Reply-To: <1493241150-21742-1-git-send-email-labbott@redhat.com>
References: <1493241150-21742-1-git-send-email-labbott@redhat.com>
X-Mailer: git-send-email 2.7.4

Enable the GEM dma-buf import interfaces in addition to the export
interfaces. This lets vgem be used as a test source for other
allocators (e.g. Ion).

Signed-off-by: Laura Abbott
Reviewed-by: Chris Wilson
---
v2: Don't require vgem-allocated buffers to exist in memory before
importing.
---
 drivers/gpu/drm/vgem/vgem_drv.c | 133 +++++++++++++++++++++++++++++++---------
 drivers/gpu/drm/vgem/vgem_drv.h |   2 +
 2 files changed, 106 insertions(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index 1b02e56..73a619a 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -34,6 +34,9 @@
 #include
 #include
 #include
+
+#include
+
 #include "vgem_drv.h"
 
 #define DRIVER_NAME "vgem"
@@ -46,6 +49,12 @@ static void vgem_gem_free_object(struct drm_gem_object *obj)
 {
 	struct drm_vgem_gem_object *vgem_obj = to_vgem_bo(obj);
 
+	if (vgem_obj->pages)
+		drm_free_large(vgem_obj->pages);
+
+	if (obj->import_attach)
+		drm_prime_gem_destroy(obj, vgem_obj->table);
+
 	drm_gem_object_release(obj);
 	kfree(vgem_obj);
 }
@@ -56,26 +65,48 @@ static int vgem_gem_fault(struct vm_fault *vmf)
 	struct drm_vgem_gem_object *obj = vma->vm_private_data;
 	/* We don't use vmf->pgoff since that has the fake offset */
 	unsigned long vaddr = vmf->address;
-	struct page *page;
-
-	page = shmem_read_mapping_page(file_inode(obj->base.filp)->i_mapping,
-				       (vaddr - vma->vm_start) >> PAGE_SHIFT);
-	if (!IS_ERR(page)) {
-		vmf->page = page;
-		return 0;
-	} else switch (PTR_ERR(page)) {
-		case -ENOSPC:
-		case -ENOMEM:
-			return VM_FAULT_OOM;
-		case -EBUSY:
-			return VM_FAULT_RETRY;
-		case -EFAULT:
-		case -EINVAL:
-			return VM_FAULT_SIGBUS;
-		default:
-			WARN_ON_ONCE(PTR_ERR(page));
-			return VM_FAULT_SIGBUS;
+	int ret;
+	loff_t num_pages;
+	pgoff_t page_offset;
+	page_offset = (vaddr - vma->vm_start) >> PAGE_SHIFT;
+
+	num_pages = DIV_ROUND_UP(obj->base.size, PAGE_SIZE);
+
+	if (page_offset > num_pages)
+		return VM_FAULT_SIGBUS;
+
+	if (obj->pages) {
+		get_page(obj->pages[page_offset]);
+		vmf->page = obj->pages[page_offset];
+		ret = 0;
+	} else {
+		struct page *page;
+
+		page = shmem_read_mapping_page(
+					file_inode(obj->base.filp)->i_mapping,
+					page_offset);
+		if (!IS_ERR(page)) {
+			vmf->page = page;
+			ret = 0;
+		} else switch (PTR_ERR(page)) {
+			case -ENOSPC:
+			case -ENOMEM:
+				ret = VM_FAULT_OOM;
+				break;
+			case -EBUSY:
+				ret = VM_FAULT_RETRY;
+				break;
+			case -EFAULT:
+			case -EINVAL:
+				ret = VM_FAULT_SIGBUS;
+				break;
+			default:
+				WARN_ON(PTR_ERR(page));
+				ret = VM_FAULT_SIGBUS;
+		}
+
 	}
+	return ret;
 }
 
 static const struct vm_operations_struct vgem_gem_vm_ops = {
@@ -112,12 +143,8 @@ static void vgem_preclose(struct drm_device *dev, struct drm_file *file)
 	kfree(vfile);
 }
 
-/* ioctls */
-
-static struct drm_gem_object *vgem_gem_create(struct drm_device *dev,
-					      struct drm_file *file,
-					      unsigned int *handle,
-					      unsigned long size)
+static struct drm_vgem_gem_object *__vgem_gem_create(struct drm_device *dev,
+						unsigned long size)
 {
 	struct drm_vgem_gem_object *obj;
 	int ret;
@@ -127,8 +154,31 @@ static struct drm_gem_object *vgem_gem_create(struct drm_device *dev,
 		return ERR_PTR(-ENOMEM);
 
 	ret = drm_gem_object_init(dev, &obj->base, roundup(size, PAGE_SIZE));
-	if (ret)
-		goto err_free;
+	if (ret) {
+		kfree(obj);
+		return ERR_PTR(ret);
+	}
+
+	return obj;
+}
+
+static void __vgem_gem_destroy(struct drm_vgem_gem_object *obj)
+{
+	drm_gem_object_release(&obj->base);
+	kfree(obj);
+}
+
+static struct drm_gem_object *vgem_gem_create(struct drm_device *dev,
+					      struct drm_file *file,
+					      unsigned int *handle,
+					      unsigned long size)
+{
+	struct drm_vgem_gem_object *obj;
+	int ret;
+
+	obj = __vgem_gem_create(dev, size);
+	if (IS_ERR(obj))
+		return ERR_CAST(obj);
 
 	ret = drm_gem_handle_create(file, &obj->base, handle);
 	drm_gem_object_unreference_unlocked(&obj->base);
@@ -137,9 +187,8 @@ static struct drm_gem_object *vgem_gem_create(struct drm_device *dev,
 
 	return &obj->base;
 
-err_free:
-	kfree(obj);
 err:
+	__vgem_gem_destroy(obj);
 	return ERR_PTR(ret);
 }
 
@@ -256,6 +305,29 @@ static struct sg_table *vgem_prime_get_sg_table(struct drm_gem_object *obj)
 	return st;
 }
 
+static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
+			struct dma_buf_attachment *attach, struct sg_table *sg)
+{
+	struct drm_vgem_gem_object *obj;
+	int npages;
+
+	obj = __vgem_gem_create(dev, attach->dmabuf->size);
+	if (IS_ERR(obj))
+		return ERR_CAST(obj);
+
+	npages = PAGE_ALIGN(attach->dmabuf->size) / PAGE_SIZE;
+
+	obj->table = sg;
+	obj->pages = drm_malloc_ab(npages, sizeof(struct page *));
+	if (!obj->pages) {
+		__vgem_gem_destroy(obj);
+		return ERR_PTR(-ENOMEM);
+	}
+	drm_prime_sg_to_page_addr_arrays(obj->table, obj->pages, NULL,
+					npages);
+	return &obj->base;
+}
+
 static void *vgem_prime_vmap(struct drm_gem_object *obj)
 {
 	long n_pages = obj->size >> PAGE_SHIFT;
@@ -314,8 +386,11 @@ static struct drm_driver vgem_driver = {
 	.dumb_map_offset = vgem_gem_dumb_map,
 
 	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
+	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
 	.gem_prime_pin = vgem_prime_pin,
+	.gem_prime_import = drm_gem_prime_import_platform,
 	.gem_prime_export = drm_gem_prime_export,
+	.gem_prime_import_sg_table = vgem_prime_import_sg_table,
 	.gem_prime_get_sg_table = vgem_prime_get_sg_table,
 	.gem_prime_vmap = vgem_prime_vmap,
 	.gem_prime_vunmap = vgem_prime_vunmap,
diff --git a/drivers/gpu/drm/vgem/vgem_drv.h b/drivers/gpu/drm/vgem/vgem_drv.h
index cb59c7a..1aae014 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.h
+++ b/drivers/gpu/drm/vgem/vgem_drv.h
@@ -43,6 +43,8 @@ struct vgem_file {
 #define to_vgem_bo(x) container_of(x, struct drm_vgem_gem_object, base)
 struct drm_vgem_gem_object {
 	struct drm_gem_object base;
+	struct page **pages;
+	struct sg_table *table;
 };
 
 int vgem_fence_open(struct vgem_file *file);
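
For reference, below is a minimal userspace sketch (not part of the patch) of
the import path this series enables, using libdrm's drmPrimeFDToHandle(). The
dma-buf fd is assumed to come from some other exporter, e.g. an Ion heap; the
device path and the helper name import_into_vgem() are illustrative only.

/* Hypothetical example, not from the patch: import a dma-buf into vgem. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>	/* libdrm: drmPrimeFDToHandle() */

int import_into_vgem(int dmabuf_fd)
{
	uint32_t handle;
	int vgem_fd, ret;

	/* Assumes card0 is the vgem node; real code should probe for it. */
	vgem_fd = open("/dev/dri/card0", O_RDWR);
	if (vgem_fd < 0)
		return -1;

	/*
	 * DRM_IOCTL_PRIME_FD_TO_HANDLE: with this patch applied, vgem accepts
	 * the import (keeping the sg_table and page array for later faults)
	 * instead of rejecting it.
	 */
	ret = drmPrimeFDToHandle(vgem_fd, dmabuf_fd, &handle);
	if (ret) {
		fprintf(stderr, "PRIME import into vgem failed: %d\n", ret);
		close(vgem_fd);
		return ret;
	}

	printf("dma-buf fd %d imported as vgem handle %u\n", dmabuf_fd, handle);
	close(vgem_fd);
	return 0;
}

Without the .prime_fd_to_handle / .gem_prime_import_sg_table hooks added
above, the same call fails, since vgem previously only exported dma-bufs.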