Message ID | 1416529576-20471-1-git-send-email-zachr@google.com (mailing list archive) |
---|---|
State | New, archived |
On Thu, Nov 20, 2014 at 04:26:16PM -0800, Zach Reizner wrote:
> This patch implements the virtual GEM driver with PRIME sharing which
> allows vgem to import a gem object from other drivers for the purpose
> of mmap-ing them to userspace.
>
> Reviewed-by: Stéphane Marchesin <marcheu@chromium.org>
> Signed-off-by: Adam Jackson <ajax@redhat.com>
> Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
> Signed-off-by: Zach Reizner <zachr@google.com>

Awesome that someone resurrects this, I still like the idea (after all
I've suggested this to Ben ages ago). Bunch of comments below to update
the driver to the latest drm changes. With those addressed this looks good.

Cheers, Daniel

> ---
>  drivers/gpu/drm/Kconfig             |   9 +
>  drivers/gpu/drm/Makefile            |   1 +
>  drivers/gpu/drm/vgem/Makefile       |   4 +
>  drivers/gpu/drm/vgem/vgem_dma_buf.c | 169 +++++++++++++++
>  drivers/gpu/drm/vgem/vgem_drv.c     | 407 ++++++++++++++++++++++++++++++++++++
>  drivers/gpu/drm/vgem/vgem_drv.h     |  62 ++++++
>  6 files changed, 652 insertions(+)
>  create mode 100644 drivers/gpu/drm/vgem/Makefile
>  create mode 100644 drivers/gpu/drm/vgem/vgem_dma_buf.c
>  create mode 100644 drivers/gpu/drm/vgem/vgem_drv.c
>  create mode 100644 drivers/gpu/drm/vgem/vgem_drv.h
>
> diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
> index e3b4b0f..39909bc 100644
> --- a/drivers/gpu/drm/Kconfig
> +++ b/drivers/gpu/drm/Kconfig
> @@ -165,6 +165,15 @@ config DRM_SAVAGE
>  	  Choose this option if you have a Savage3D/4/SuperSavage/Pro/Twister
>  	  chipset. If M is selected the module will be called savage.
>
> +config DRM_VGEM
> +	tristate "Virtual GEM provider"
> +	depends on DRM
> +	help
> +	  Choose this option to get a virtual graphics memory manager,
> +	  as used by Mesa's software renderer for enhanced performance.
> +	  If M is selected the module will be called vgem.
> + > + > source "drivers/gpu/drm/exynos/Kconfig" > > source "drivers/gpu/drm/vmwgfx/Kconfig" > diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile > index c3cf64c..c1e4a0e 100644 > --- a/drivers/gpu/drm/Makefile > +++ b/drivers/gpu/drm/Makefile > @@ -47,6 +47,7 @@ obj-$(CONFIG_DRM_SIS) += sis/ > obj-$(CONFIG_DRM_SAVAGE)+= savage/ > obj-$(CONFIG_DRM_VMWGFX)+= vmwgfx/ > obj-$(CONFIG_DRM_VIA) +=via/ > +obj-$(CONFIG_DRM_VGEM) += vgem/ > obj-$(CONFIG_DRM_NOUVEAU) +=nouveau/ > obj-$(CONFIG_DRM_EXYNOS) +=exynos/ > obj-$(CONFIG_DRM_GMA500) += gma500/ > diff --git a/drivers/gpu/drm/vgem/Makefile b/drivers/gpu/drm/vgem/Makefile > new file mode 100644 > index 0000000..1055cb7 > --- /dev/null > +++ b/drivers/gpu/drm/vgem/Makefile > @@ -0,0 +1,4 @@ > +ccflags-y := -Iinclude/drm > +vgem-y := vgem_drv.o vgem_dma_buf.o > + > +obj-$(CONFIG_DRM_VGEM) += vgem.o > diff --git a/drivers/gpu/drm/vgem/vgem_dma_buf.c b/drivers/gpu/drm/vgem/vgem_dma_buf.c > new file mode 100644 > index 0000000..8450124 > --- /dev/null > +++ b/drivers/gpu/drm/vgem/vgem_dma_buf.c > @@ -0,0 +1,169 @@ > +/* > + * Copyright © 2012 Intel Corporation > + * Copyright © 2014 The Chromium OS Authors > + * > + * Permission is hereby granted, free of charge, to any person obtaining a > + * copy of this software and associated documentation files (the "Software"), > + * to deal in the Software without restriction, including without limitation > + * the rights to use, copy, modify, merge, publish, distribute, sublicense, > + * and/or sell copies of the Software, and to permit persons to whom the > + * Software is furnished to do so, subject to the following conditions: > + * > + * The above copyright notice and this permission notice (including the next > + * paragraph) shall be included in all copies or substantial portions of the > + * Software. 
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + * Authors:
> + *	Ben Widawsky <ben@bwidawsk.net>
> + *
> + */
> +
> +#include <linux/dma-buf.h>
> +#include "vgem_drv.h"
> +
> +#define VGEM_FD_PERMS 0600
> +
> +static struct sg_table *vgem_gem_map_dma_buf(struct dma_buf_attachment *attach,
> +					     enum dma_data_direction dir)
> +{
> +	struct drm_vgem_gem_object *obj = attach->dmabuf->priv;
> +	struct sg_table *sg;
> +	int ret;
> +
> +	ret = vgem_gem_get_pages(obj);
> +	if (ret)
> +		return ERR_PTR(ret);
> +
> +	/* VGEM assumes cache coherent access. Normally we might have to flush
> +	 * caches here */
> +
> +	BUG_ON(obj->pages == NULL);
> +
> +	sg = drm_prime_pages_to_sg(obj->pages, obj->base.size / PAGE_SIZE);
> +	if (!sg) {
> +		vgem_gem_put_pages(obj);
> +		return NULL;
> +	}
> +
> +	return sg;
> +}
> +
> +static void vgem_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
> +				   struct sg_table *sg,
> +				   enum dma_data_direction data_direction)
> +{
> +	sg_free_table(sg);
> +	kfree(sg);
> +}
> +
> +static void *vgem_kmap_atomic_dma_buf(struct dma_buf *dma_buf,
> +				      unsigned long page_num)
> +{
> +	return NULL;
> +}
> +
> +static void *vgem_kmap_dma_buf(struct dma_buf *dma_buf,
> +			       unsigned long page_num)
> +{
> +	return NULL;
> +}
> +
> +static int vgem_mmap_dma_buf(struct dma_buf *dma_buf,
> +			     struct vm_area_struct *vma)
> +{
> +	return -EINVAL;

vmap support (used by udl) might be useful. Quick peek at i915 should be
sufficient. But just a suggestion, I'm ok if you leave it out.
> +} > + > +static struct dma_buf_ops vgem_dmabuf_ops = { > + .map_dma_buf = vgem_gem_map_dma_buf, > + .unmap_dma_buf = vgem_gem_unmap_dma_buf, > + .release = drm_gem_dmabuf_release, > + .kmap_atomic = vgem_kmap_atomic_dma_buf, > + .kmap = vgem_kmap_dma_buf, > + .mmap = vgem_mmap_dma_buf, > +}; > + > +struct dma_buf *vgem_gem_prime_export(struct drm_device *dev, > + struct drm_gem_object *obj, > + int flags) > +{ > + return dma_buf_export(obj, &vgem_dmabuf_ops, obj->size, flags, NULL); > +} > + > +struct drm_gem_object *vgem_gem_prime_import(struct drm_device *dev, > + struct dma_buf *dma_buf) > +{ > + struct drm_vgem_gem_object *obj = NULL; > + struct dma_buf_attachment *attach = NULL; > + struct sg_table *sg = NULL; > + int num_pages; > + int ret; > + > + attach = dma_buf_attach(dma_buf, dev->dev); > + if (IS_ERR(attach)) { > + ret = PTR_ERR(attach); > + goto fail; > + } > + > + get_dma_buf(dma_buf); > + > + sg = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL); > + if (IS_ERR(sg)) { > + ret = PTR_ERR(sg); > + goto fail_detach; > + } > + > + obj = kzalloc(sizeof(*obj), GFP_KERNEL); > + if (obj == NULL) { > + ret = -ENOMEM; > + goto fail_unmap; > + } > + > + obj->base.import_attach = attach; > + obj->sg = sg; > + > + /* As a result of this mmap will not work -yet- */ > + ret = drm_gem_object_init(dev, &obj->base, dma_buf->size); > + if (ret) { > + ret = -ENOMEM; > + goto fail_free; > + } > + > + num_pages = obj->base.size / PAGE_SIZE; > + > + obj->pages = drm_malloc_ab(num_pages, sizeof(struct page *)); > + if (!obj->pages) { > + ret = -ENOMEM; > + goto fail_gem_release; > + } > + > + ret = drm_prime_sg_to_page_addr_arrays(sg, obj->pages, NULL, num_pages); > + if (ret) { > + ret = -ENOMEM; > + goto fail_free_pages; > + } > + > + return &obj->base; > + > +fail_free_pages: > + drm_free_large(obj->pages); > +fail_gem_release: > + drm_gem_object_release(&obj->base); > +fail_free: > + kfree(obj); > +fail_unmap: > + dma_buf_unmap_attachment(attach, sg, 
DMA_BIDIRECTIONAL); > +fail_detach: > + dma_buf_detach(dma_buf, attach); > + dma_buf_put(dma_buf); > +fail: > + return ERR_PTR(ret); > +} General comment: Might be good to look into the gem prime helpers, that should simplify this code quite a bit. The prime helpers are a new addition since Ben wrote this code many aeons ago: http://people.freedesktop.org/~danvet/drm/drm-memory-management.html#drm-prime-support > diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c > new file mode 100644 > index 0000000..b0032c9 > --- /dev/null > +++ b/drivers/gpu/drm/vgem/vgem_drv.c > @@ -0,0 +1,407 @@ > +/* > + * Copyright 2011 Red Hat, Inc. > + * Copyright © 2014 The Chromium OS Authors > + * > + * Permission is hereby granted, free of charge, to any person obtaining a > + * copy of this software and associated documentation files (the "Software") > + * to deal in the software without restriction, including without limitation > + * on the rights to use, copy, modify, merge, publish, distribute, sub > + * license, and/or sell copies of the Software, and to permit persons to whom > + * them Software is furnished to do so, subject to the following conditions: > + * > + * The above copyright notice and this permission notice (including the next > + * paragraph) shall be included in all copies or substantial portions of the > + * Software. > + * > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR > + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTIBILITY, > + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL > + * THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES, OR OTHER LIABILITY, WHETHER > + * IN AN ACTION OF CONTRACT, TORT, OR OTHERWISE, ARISING FROM, OUT OF OR IN > + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
> + *
> + * Authors:
> + *	Adam Jackson <ajax@redhat.com>
> + *	Ben Widawsky <ben@bwidawsk.net>
> + */
> +
> +/**
> + * This is vgem, a (non-hardware-backed) GEM service. This is used by Mesa's
> + * software renderer and the X server for efficient buffer sharing.
> + */
> +
> +#include <linux/module.h>
> +#include <linux/ramfs.h>
> +#include <linux/shmem_fs.h>
> +#include "vgem_drv.h"
> +
> +#define DRIVER_NAME "vgem"
> +#define DRIVER_DESC "Virtual GEM provider"
> +#define DRIVER_DATE "20120112"
> +#define DRIVER_MAJOR 1
> +#define DRIVER_MINOR 0
> +
> +static int vgem_load(struct drm_device *dev, unsigned long flags)
> +{
> +	platform_set_drvdata(dev->platformdev, dev);
> +	return 0;
> +}
> +
> +static int vgem_unload(struct drm_device *dev)
> +{
> +	return 0;
> +}
> +
> +static int vgem_open(struct drm_device *dev, struct drm_file *file)
> +{
> +	return 0;
> +}
> +
> +static void vgem_preclose(struct drm_device *dev, struct drm_file *file)
> +{
> +}
> +
> +static void vgem_lastclose(struct drm_device *dev)
> +{
> +}

If you rework the device load sequence as suggested below your load
function is empty. And all 5 of the above callbacks are optional, so you
can leave them all out.
> + > +void vgem_gem_put_pages(struct drm_vgem_gem_object *obj) > +{ > + int num_pages = obj->base.size / PAGE_SIZE; > + int i; > + > + for (i = 0; i < num_pages; i++) { > + if (obj->pages[i] == NULL) > + break; > + page_cache_release(obj->pages[i]); > + } > + > + drm_free_large(obj->pages); > + obj->pages = NULL; > +} > + > +static void vgem_gem_free_object(struct drm_gem_object *obj) > +{ > + struct drm_vgem_gem_object *vgem_obj = to_vgem_bo(obj); > + > + drm_gem_free_mmap_offset(obj); > + > + if (obj->import_attach) { > + drm_prime_gem_destroy(obj, vgem_obj->sg); > + vgem_obj->pages = NULL; > + } > + > + drm_gem_object_release(obj); > + > + if (vgem_obj->pages) > + vgem_gem_put_pages(vgem_obj); > + > + vgem_obj->pages = NULL; > + > + kfree(vgem_obj); > +} > + > +int vgem_gem_get_pages(struct drm_vgem_gem_object *obj) > +{ > + struct address_space *mapping; > + gfp_t gfpmask = GFP_KERNEL; > + int num_pages, i, ret = 0; > + > + num_pages = obj->base.size / PAGE_SIZE; > + > + if (!obj->pages) { > + obj->pages = drm_malloc_ab(num_pages, sizeof(struct page *)); > + if (obj->pages == NULL) > + return -ENOMEM; > + } else { > + return 0; > + } > + > + mapping = obj->base.filp->f_path.dentry->d_inode->i_mapping; > + gfpmask |= mapping_gfp_mask(mapping); > + > + for (i = 0; i < num_pages; i++) { > + struct page *page; > + obj->pages[i] = NULL; > + page = shmem_read_mapping_page_gfp(mapping, i, gfpmask); > + if (IS_ERR(page)) { > + ret = PTR_ERR(page); > + goto err_out; > + } > + obj->pages[i] = page; > + } > + > + return ret; > + > +err_out: > + vgem_gem_put_pages(obj); > + return ret; > +} > + > +static int vgem_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf) > +{ > + struct drm_vgem_gem_object *obj = to_vgem_bo(vma->vm_private_data); > + struct drm_device *dev = obj->base.dev; > + loff_t num_pages; > + pgoff_t page_offset; > + int ret; > + > + /* We don't use vmf->pgoff since that has the fake offset */ > + page_offset = ((unsigned long)vmf->virtual_address 
- vma->vm_start) >> > + PAGE_SHIFT; > + > + num_pages = DIV_ROUND_UP(obj->base.size, PAGE_SIZE); > + > + if (page_offset > num_pages) > + return VM_FAULT_SIGBUS; > + > + mutex_lock(&dev->struct_mutex); > + > + ret = vm_insert_page(vma, (unsigned long)vmf->virtual_address, > + obj->pages[page_offset]); > + > + mutex_unlock(&dev->struct_mutex); > + switch (ret) { > + case 0: > + return VM_FAULT_NOPAGE; > + case -ENOMEM: > + return VM_FAULT_OOM; > + case -EBUSY: > + return VM_FAULT_RETRY; > + case -EFAULT: > + case -EINVAL: > + return VM_FAULT_SIGBUS; > + default: > + WARN_ON(1); > + return VM_FAULT_SIGBUS; > + } > +} > + > +static struct vm_operations_struct vgem_gem_vm_ops = { > + .fault = vgem_gem_fault, > + .open = drm_gem_vm_open, > + .close = drm_gem_vm_close, > +}; > + > +/* ioctls */ > + > +static struct drm_gem_object *vgem_gem_create(struct drm_device *dev, > + struct drm_file *file, > + unsigned int *handle, > + unsigned long size) > +{ > + struct drm_vgem_gem_object *obj; > + struct drm_gem_object *gem_object; > + int err; > + > + size = roundup(size, PAGE_SIZE); > + > + obj = kzalloc(sizeof(*obj), GFP_KERNEL); > + if (!obj) > + return ERR_PTR(-ENOMEM); > + > + gem_object = &obj->base; > + > + err = drm_gem_object_init(dev, gem_object, size); > + if (err) > + goto out; > + > + err = drm_gem_handle_create(file, gem_object, handle); > + if (err) > + goto handle_out; > + > + drm_gem_object_unreference_unlocked(gem_object); > + > + return gem_object; > + > +handle_out: > + drm_gem_object_release(gem_object); > +out: > + kfree(obj); > + return ERR_PTR(err); > +} > + > +static int vgem_gem_dumb_create(struct drm_file *file, struct drm_device *dev, > + struct drm_mode_create_dumb *args) > +{ > + struct drm_gem_object *gem_object; > + uint64_t size; > + > + size = args->height * args->width * DIV_ROUND_UP(args->bpp, 8); > + if (size == 0) > + return -EINVAL; > + > + gem_object = vgem_gem_create(dev, file, &args->handle, size); > + > + if (IS_ERR(gem_object)) { > + 
DRM_DEBUG_DRIVER("object creation failed\n"); > + return PTR_ERR(gem_object); > + } > + > + args->size = gem_object->size; > + args->pitch = args->width; > + > + DRM_DEBUG_DRIVER("Created object of size %lld\n", size); > + > + return 0; > +} > + > +int vgem_gem_dumb_map(struct drm_file *file, struct drm_device *dev, > + uint32_t handle, uint64_t *offset) > +{ > + int ret = 0; > + struct drm_gem_object *obj; > + > + mutex_lock(&dev->struct_mutex); > + obj = drm_gem_object_lookup(dev, file, handle); > + if (!obj) { > + ret = -ENOENT; > + goto unlock; > + } > + > + if (!drm_vma_node_has_offset(&obj->vma_node)) { > + ret = drm_gem_create_mmap_offset(obj); > + if (ret) > + goto unref; > + } > + > + BUG_ON(!obj->filp); > + > + obj->filp->private_data = obj; > + > + ret = vgem_gem_get_pages(to_vgem_bo(obj)); > + if (ret) > + goto fail_get_pages; > + > + *offset = drm_vma_node_offset_addr(&obj->vma_node); > + > + goto unref; > + > +fail_get_pages: > + drm_gem_free_mmap_offset(obj); > +unref: > + drm_gem_object_unreference(obj); > +unlock: > + mutex_unlock(&dev->struct_mutex); > + return ret; > +} > + > +int vgem_drm_gem_mmap(struct file *filp, struct vm_area_struct *vma) > +{ > + int ret; > + > + ret = drm_gem_mmap(filp, vma); > + if (ret) > + return ret; > + > + vma->vm_flags &= ~VM_PFNMAP; > + vma->vm_flags |= VM_MIXEDMAP; > + > + return ret; > +} > + > + > +static struct drm_ioctl_desc vgem_ioctls[] = { > +}; > + > +static const struct file_operations vgem_driver_fops = { > + .owner = THIS_MODULE, > + .open = drm_open, > + .mmap = vgem_drm_gem_mmap, > + .poll = drm_poll, > + .read = drm_read, > + .unlocked_ioctl = drm_ioctl, > + .release = drm_release, > +}; > + > +static struct drm_driver vgem_driver = { > + .driver_features = DRIVER_GEM | DRIVER_PRIME, > + .load = vgem_load, > + .unload = vgem_unload, > + .open = vgem_open, > + .preclose = vgem_preclose, > + .lastclose = vgem_lastclose, > + .gem_free_object = vgem_gem_free_object, > + .gem_vm_ops = &vgem_gem_vm_ops, > 
+ .ioctls = vgem_ioctls, > + .fops = &vgem_driver_fops, > + .dumb_create = vgem_gem_dumb_create, > + .dumb_map_offset = vgem_gem_dumb_map, > + .prime_handle_to_fd = drm_gem_prime_handle_to_fd, > + .prime_fd_to_handle = drm_gem_prime_fd_to_handle, > + .gem_prime_export = vgem_gem_prime_export, > + .gem_prime_import = vgem_gem_prime_import, > + .name = DRIVER_NAME, > + .desc = DRIVER_DESC, > + .date = DRIVER_DATE, > + .major = DRIVER_MAJOR, > + .minor = DRIVER_MINOR, > +}; > + > +static int vgem_platform_probe(struct platform_device *pdev) > +{ > + vgem_driver.num_ioctls = ARRAY_SIZE(vgem_ioctls); > + > + return drm_platform_init(&vgem_driver, pdev); > +} > + > +static int vgem_platform_remove(struct platform_device *pdev) > +{ > + drm_put_dev(platform_get_drvdata(pdev)); > + return 0; > +} > + > +static struct platform_driver vgem_platform_driver = { > + .probe = vgem_platform_probe, > + .remove = vgem_platform_remove, > + .driver = { > + .owner = THIS_MODULE, > + .name = DRIVER_NAME, > + }, > +}; > + > +static struct platform_device *vgem_device; > + > +static int __init vgem_init(void) > +{ > + int ret; > + > + ret = platform_driver_register(&vgem_platform_driver); > + if (ret) > + goto out; > + > + vgem_device = platform_device_alloc("vgem", -1); > + if (!vgem_device) { > + ret = -ENOMEM; > + goto unregister_out; > + } > + > + vgem_device->dev.coherent_dma_mask = ~0ULL; > + vgem_device->dev.dma_mask = &vgem_device->dev.coherent_dma_mask; > + > + ret = platform_device_add(vgem_device); > + if (ret) > + goto put_out; With the new drm_dev_alloc/register functions you should be able to set up a virtual drm device without any real struct device as backing object. That would also allow you to ditch the almost-dummy ->load function, since ->load is somewhat deprecated. tegra is already converted, but the 2 new drm drivers also initialize the new way so there's a bunch of examples. 
> + > + return 0; > + > +put_out: > + platform_device_put(vgem_device); > +unregister_out: > + platform_driver_unregister(&vgem_platform_driver); > +out: > + return ret; > +} > + > +static void __exit vgem_exit(void) > +{ > + platform_device_unregister(vgem_device); > + platform_driver_unregister(&vgem_platform_driver); > +} > + > +module_init(vgem_init); > +module_exit(vgem_exit); > + > +MODULE_AUTHOR("Red Hat, Inc."); > +MODULE_DESCRIPTION(DRIVER_DESC); > +MODULE_LICENSE("GPL and additional rights"); > diff --git a/drivers/gpu/drm/vgem/vgem_drv.h b/drivers/gpu/drm/vgem/vgem_drv.h > new file mode 100644 > index 0000000..6d889a2 > --- /dev/null > +++ b/drivers/gpu/drm/vgem/vgem_drv.h > @@ -0,0 +1,62 @@ > +/* > + * Copyright © 2012 Intel Corporation > + * Copyright © 2014 The Chromium OS Authors > + * > + * Permission is hereby granted, free of charge, to any person obtaining a > + * copy of this software and associated documentation files (the "Software"), > + * to deal in the Software without restriction, including without limitation > + * the rights to use, copy, modify, merge, publish, distribute, sublicense, > + * and/or sell copies of the Software, and to permit persons to whom the > + * Software is furnished to do so, subject to the following conditions: > + * > + * The above copyright notice and this permission notice (including the next > + * paragraph) shall be included in all copies or substantial portions of the > + * Software. > + * > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR > + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, > + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL > + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER > + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING > + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS > + * IN THE SOFTWARE. 
> + * > + * Authors: > + * Ben Widawsky <ben@bwidawsk.net> > + * > + */ > + > +#ifndef _VGEM_DRV_H_ > +#define _VGEM_DRV_H_ > + > +#include <drm/drmP.h> > +#include <drm/drm_gem.h> > + > +#define to_vgem_bo(x) container_of(x, struct drm_vgem_gem_object, base) > +struct drm_vgem_gem_object { > + struct drm_gem_object base; > + struct page **pages; > + struct sg_table *sg; > +}; > + > +/* vgem_drv.c */ > +extern void vgem_gem_put_pages(struct drm_vgem_gem_object *obj); > +extern int vgem_gem_get_pages(struct drm_vgem_gem_object *obj); > + > +/* vgem_dma_buf.c */ > +extern int vgem_prime_to_fd(struct drm_device *dev, > + struct drm_file *file_priv, > + uint32_t handle, int *prime_fd); > + > +extern int vgem_prime_to_handle(struct drm_device *dev, > + struct drm_file *file_priv, > + int prime_fd, uint32_t *handle); > + > +extern struct dma_buf *vgem_gem_prime_export(struct drm_device *dev, > + struct drm_gem_object *obj, > + int flags); > + > +extern struct drm_gem_object *vgem_gem_prime_import(struct drm_device *dev, > + struct dma_buf *dma_buf); > + > +#endif > -- > 2.1.0.rc2.206.gedb03e5 > > _______________________________________________ > dri-devel mailing list > dri-devel@lists.freedesktop.org > http://lists.freedesktop.org/mailman/listinfo/dri-devel
On Thu, 2014-11-20 at 16:26 -0800, Zach Reizner wrote: > This patch implements the virtual GEM driver with PRIME sharing which > allows vgem to import a gem object from other drivers for the purpose > of mmap-ing them to userspace. The reason I initially wanted this was to get zero-copy llvmpipe, since (besides making GLX conformance impossible) the image transfer parts of drisw are easily the biggest part of the runtime overhead. Is there a userspace consumer of this anywhere? - ajax
ajax: The consumer of this will be software renderers. We're working on one right now that will be using dma-bufs from userspace. Daniel: Thanks for your suggestions. I'll work on it and submit a v2. On Fri Nov 21 2014 at 6:02:45 AM Adam Jackson <ajax@redhat.com> wrote: > On Thu, 2014-11-20 at 16:26 -0800, Zach Reizner wrote: > > This patch implements the virtual GEM driver with PRIME sharing which > > allows vgem to import a gem object from other drivers for the purpose > > of mmap-ing them to userspace. > > The reason I initially wanted this was to get zero-copy llvmpipe, since > (besides making GLX conformance impossible) the image transfer parts of > drisw are easily the biggest part of the runtime overhead. Is there a > userspace consumer of this anywhere? > > - ajax > >
After looking into removing platform_device, I found that using dma_buf_attach with a NULL device always returns an error, thereby preventing me from using VGEM for import and mmap. The solution seems to be to skip using dma_buf_attach, and instead use dma_buf_mmap when user-space tries to mmap a gem object that was imported into VGEM. The drawback to this approach is that most drivers stub their dma_buf_ops->mmap implementation. Presumably mmap could be implemented for the drivers that this would make sense for. Are there any comments, questions, or concerns for this proposed solution? On Fri Nov 21 2014 at 10:45:08 AM Zachary Reizner <zachr@google.com> wrote: > ajax: The consumer of this will be software renderers. We're working on > one right now that will be using dma-bufs from userspace. > Daniel: Thanks for your suggestions. I'll work on it and submit a v2. > On Fri Nov 21 2014 at 6:02:45 AM Adam Jackson <ajax@redhat.com> wrote: > >> On Thu, 2014-11-20 at 16:26 -0800, Zach Reizner wrote: >> > This patch implements the virtual GEM driver with PRIME sharing which >> > allows vgem to import a gem object from other drivers for the purpose >> > of mmap-ing them to userspace. >> >> The reason I initially wanted this was to get zero-copy llvmpipe, since >> (besides making GLX conformance impossible) the image transfer parts of >> drisw are easily the biggest part of the runtime overhead. Is there a >> userspace consumer of this anywhere? >> >> - ajax >> >>
On 11/25/2014 02:08 AM, Zachary Reizner wrote: > After looking into removing platform_device, I found that using > dma_buf_attach with a NULL device always returns an error, thereby > preventing me from using VGEM for import and mmap. The solution seems > to be to skip using dma_buf_attach, and instead use dma_buf_mmap when > user-space tries to mmap a gem object that was imported into VGEM. The > drawback to this approach is that most drivers stub their > dma_buf_ops->mmap implementation. Presumably mmap could be implemented > for the drivers that this would make sense for. Are there any > comments, questions, or concerns for this proposed solution? I see now that this driver has entered -next, and I'm sorry this comment didn't arrive before. I simply missed this discussion :( My biggest concern, as stated many many times before, is that dma-buf mmap is a horrible interface for incoherent drivers, and for drivers that use odd format (tiled) dma-bufs, basically since it doesn't supply a dirtied region. Therefore (correct me if I'm wrong) there has been an agreement that for purposes outside of ARM SOC, we should simply not implement dma-buf mmap for other uses than for internal driver use. So assume a real driver implements dma-buf mmap, but it is crawling due to coherency- or untiling / tiling operations. How do you tell a generic user of the vgem driver *NOT* to mmap for performance reasons? Or is this driver only intended for ARM SOC systems? Thanks, Thomas
On Thu, Apr 02, 2015 at 10:30:53AM +0200, Thomas Hellstrom wrote: > On 11/25/2014 02:08 AM, Zachary Reizner wrote: > > After looking into removing platform_device, I found that using > > dma_buf_attach with a NULL device always returns an error, thereby > > preventing me from using VGEM for import and mmap. The solution seems > > to be to skip using dma_buf_attach, and instead use dma_buf_mmap when > > user-space tries to mmap a gem object that was imported into VGEM. The > > drawback to this approach is that most drivers stub their > > dma_buf_ops->mmap implementation. Presumably mmap could be implemented > > for the drivers that this would make sense for. Are there any > > comments, questions, or concerns for this proposed solution? > > I see now that this driver has entered -next, and I'm sorry this comment > didn't arrive before. I simply missed this discussion :( > > My biggest concern, as stated many many times before, is that dma-buf > mmap is a horrible interface for incoherent drivers, and for drivers > that use odd format (tiled) dma-bufs, basically since it doesn't supply > a dirtied region. Therefore (correct me if I'm wrong) there has been an > agreement that for purposes outside of ARM SOC, we should simply not > implement dma-buf mmap for other uses than for internal driver use. > > So assume a real driver implements dma-buf mmap, but it is crawling due > to coherency- or untiling / tiling operations. How do you tell a generic > user of the vgem driver *NOT* to mmap for performance reasons? Or is > this driver only intended for ARM SOC systems? Seconded. Somehow I thought we've pulled in vgem to support software rendering like llvmpipe, and I remember that that's been the original justification. 
TIL that that's indeed not the case: google is splattering their cros
tree with dma_buf->mmap implementations, and the intention really seems
to be to use dma_buf->mmap and vgem as the generic interface to expose
buffer objects of real drivers to software rendering.

Given that neither vgem nor dma_buf->mmap has any sane concept of
handling coherency I'm really unhappy about this and tempted to just
submit the revert for vgem before 4.1 ships. I'll chat with relevant
people a bit more. Worse, I chatted with Stephane today and he brushed
this off as not-my-problem: if this hurts intel, intel should fix it.
That's not how a proper upstream interface gets designed, and coherency
handling is an even more serious problem on ARM and virtual hw like
vmwgfx. On intel (well, at least big core, thanks to the huge coherent
cache fabric) this is mostly a non-issue, except that the patch in the
cros tree obviously gets things wrong.

Decently pissed tbh.

Cheers, Daniel
On 05/21/2015 11:13 AM, Daniel Vetter wrote: > On Thu, Apr 02, 2015 at 10:30:53AM +0200, Thomas Hellstrom wrote: >> On 11/25/2014 02:08 AM, Zachary Reizner wrote: >>> After looking into removing platform_device, I found that using >>> dma_buf_attach with a NULL device always returns an error, thereby >>> preventing me from using VGEM for import and mmap. The solution seems >>> to be to skip using dma_buf_attach, and instead use dma_buf_mmap when >>> user-space tries to mmap a gem object that was imported into VGEM. The >>> drawback to this approach is that most drivers stub their >>> dma_buf_ops->mmap implementation. Presumably mmap could be implemented >>> for the drivers that this would make sense for. Are there any >>> comments, questions, or concerns for this proposed solution? >> I see now that this driver has entered -next, and I'm sorry this comment >> didn't arrive before. I simply missed this discussion :( >> >> My biggest concern, as stated many many times before, is that dma-buf >> mmap is a horrible interface for incoherent drivers, and for drivers >> that use odd format (tiled) dma-bufs, basically since it doesn't supply >> a dirtied region. Therefore (correct me if I'm wrong) there has been an >> agreement that for purposes outside of ARM SOC, we should simply not >> implement dma-buf mmap for other uses than for internal driver use. >> >> So assume a real driver implements dma-buf mmap, but it is crawling due >> to coherency- or untiling / tiling operations. How do you tell a generic >> user of the vgem driver *NOT* to mmap for performance reasons? Or is >> this driver only intended for ARM SOC systems? > Seconded. Somehow I thought we've pulled in vgem to support software > rendering like llvmpipe, and I remember that that's been the original > justification. 
TIL that that's indeed not the case and google is > splattering their cros tree with dma_buf->mmap implementations this is > obviously not the case and the intention really seems to be to use > dma_buf->mmap and vgem as the generic interface to expose buffer objects > of real drivers to software rendering. > > Given that neither vgem nor dma_buf->mmap has any sane concept of handling > coherency I'm really unhappy about this and tempted to just submit the > revert for vgem before 4.1 ships. I'll chat with relevant people a bit > more. Worse I chatted with Stephane today and he brushed this off as > not-my-problem and if this hurts intel intel should fix this. That's not > how a proper usptream interface is getting designd, and coherency handling > is an even more serious problem on arm an virtual hw like vmwgfx. So given how this has turned out, my opinion is that before a usable generic mmap of accelerated buffer objects goes upstream, there should be a proper interface to request regions present and to dirty regions. It seems to me that so far all use-cases are for one- or two-dimensional access so it should be sufficient to start with that and add other access modes later on. Now this is no guarantee that people won't request and dirty the whole dma-buf on each access, but at least that would make people think, and if things become slow it's pretty clear where the problem is. I'm all for delaying vgem until we have such an interface in place. Thanks, Thomas > On intel > (well at least big core thanks to the huge coherent cache fabric) this is > mostly a non-issue, except that the patch in the cros tree obviously gets > things wrong. > > Decently pissed tbh. > > Cheers, Daniel
On Thu, May 21, 2015 at 9:49 AM, Thomas Hellstrom <thellstrom@vmware.com> wrote:
> On 05/21/2015 11:13 AM, Daniel Vetter wrote:
>> On Thu, Apr 02, 2015 at 10:30:53AM +0200, Thomas Hellstrom wrote:
>>> [snip]
>> [snip]
>
> So given how this has turned out, my opinion is that before a usable
> generic mmap of accelerated buffer objects goes upstream, there should
> be a proper interface to request regions present and to dirty regions.
> It seems to me that so far all use-cases are for one- or two-dimensional
> access, so it should be sufficient to start with that and add other
> access modes later on. Now this is no guarantee that people won't
> request and dirty the whole dma-buf on each access, but at least that
> would make people think, and if things become slow it's pretty clear
> where the problem is.
>
> I'm all for delaying vgem until we have such an interface in place.

so, for the llvmpipe use-case, I think dri2 is sufficient. So why don't
we just drop the DRIVER_PRIME flag for now..

BR,
-R

> Thanks,
> Thomas
>
>> [snip]
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel
On 05/21/2015 04:19 PM, Rob Clark wrote:
> On Thu, May 21, 2015 at 9:49 AM, Thomas Hellstrom <thellstrom@vmware.com> wrote:
>> On 05/21/2015 11:13 AM, Daniel Vetter wrote:
>>> [snip]
>>
>> So given how this has turned out, my opinion is that before a usable
>> generic mmap of accelerated buffer objects goes upstream, there should
>> be a proper interface to request regions present and to dirty regions.
>> [snip]
>>
>> I'm all for delaying vgem until we have such an interface in place.
> so, for the llvmpipe use-case, I think dri2 is sufficient. So why don't
> we just drop the DRIVER_PRIME flag for now..
>
> BR,
> -R

Fine with me!

/Thomas

>> Thanks,
>> Thomas
>>
>>> [snip]
>> _______________________________________________
>> dri-devel mailing list
>> dri-devel@lists.freedesktop.org
>> http://lists.freedesktop.org/mailman/listinfo/dri-devel
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index e3b4b0f..39909bc 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -165,6 +165,15 @@ config DRM_SAVAGE
 	  Choose this option if you have a Savage3D/4/SuperSavage/Pro/Twister
 	  chipset. If M is selected the module will be called savage.
 
+config DRM_VGEM
+	tristate "Virtual GEM provider"
+	depends on DRM
+	help
+	  Choose this option to get a virtual graphics memory manager,
+	  as used by Mesa's software renderer for enhanced performance.
+	  If M is selected the module will be called vgem.
+
+
 source "drivers/gpu/drm/exynos/Kconfig"
 
 source "drivers/gpu/drm/vmwgfx/Kconfig"
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index c3cf64c..c1e4a0e 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -47,6 +47,7 @@ obj-$(CONFIG_DRM_SIS)   += sis/
 obj-$(CONFIG_DRM_SAVAGE)+= savage/
 obj-$(CONFIG_DRM_VMWGFX)+= vmwgfx/
 obj-$(CONFIG_DRM_VIA)	+=via/
+obj-$(CONFIG_DRM_VGEM)	+= vgem/
 obj-$(CONFIG_DRM_NOUVEAU) +=nouveau/
 obj-$(CONFIG_DRM_EXYNOS) +=exynos/
 obj-$(CONFIG_DRM_GMA500) += gma500/
diff --git a/drivers/gpu/drm/vgem/Makefile b/drivers/gpu/drm/vgem/Makefile
new file mode 100644
index 0000000..1055cb7
--- /dev/null
+++ b/drivers/gpu/drm/vgem/Makefile
@@ -0,0 +1,4 @@
+ccflags-y := -Iinclude/drm
+vgem-y := vgem_drv.o vgem_dma_buf.o
+
+obj-$(CONFIG_DRM_VGEM) += vgem.o
diff --git a/drivers/gpu/drm/vgem/vgem_dma_buf.c b/drivers/gpu/drm/vgem/vgem_dma_buf.c
new file mode 100644
index 0000000..8450124
--- /dev/null
+++ b/drivers/gpu/drm/vgem/vgem_dma_buf.c
@@ -0,0 +1,169 @@
+/*
+ * Copyright © 2012 Intel Corporation
+ * Copyright © 2014 The Chromium OS Authors
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *	Ben Widawsky <ben@bwidawsk.net>
+ *
+ */
+
+#include <linux/dma-buf.h>
+#include "vgem_drv.h"
+
+#define VGEM_FD_PERMS 0600
+
+static struct sg_table *vgem_gem_map_dma_buf(struct dma_buf_attachment *attach,
+					     enum dma_data_direction dir)
+{
+	struct drm_vgem_gem_object *obj = attach->dmabuf->priv;
+	struct sg_table *sg;
+	int ret;
+
+	ret = vgem_gem_get_pages(obj);
+	if (ret)
+		return ERR_PTR(ret);
+
+	/* VGEM assumes cache coherent access. Normally we might have to flush
+	 * caches here */
+
+	BUG_ON(obj->pages == NULL);
+
+	sg = drm_prime_pages_to_sg(obj->pages, obj->base.size / PAGE_SIZE);
+	if (!sg) {
+		vgem_gem_put_pages(obj);
+		return NULL;
+	}
+
+	return sg;
+}
+
+static void vgem_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
+				   struct sg_table *sg,
+				   enum dma_data_direction data_direction)
+{
+	sg_free_table(sg);
+	kfree(sg);
+}
+
+static void *vgem_kmap_atomic_dma_buf(struct dma_buf *dma_buf,
+				      unsigned long page_num)
+{
+	return NULL;
+}
+
+static void *vgem_kmap_dma_buf(struct dma_buf *dma_buf,
+			       unsigned long page_num)
+{
+	return NULL;
+}
+
+static int vgem_mmap_dma_buf(struct dma_buf *dma_buf,
+			     struct vm_area_struct *vma)
+{
+	return -EINVAL;
+}
+
+static struct dma_buf_ops vgem_dmabuf_ops = {
+	.map_dma_buf = vgem_gem_map_dma_buf,
+	.unmap_dma_buf = vgem_gem_unmap_dma_buf,
+	.release = drm_gem_dmabuf_release,
+	.kmap_atomic = vgem_kmap_atomic_dma_buf,
+	.kmap = vgem_kmap_dma_buf,
+	.mmap = vgem_mmap_dma_buf,
+};
+
+struct dma_buf *vgem_gem_prime_export(struct drm_device *dev,
+				      struct drm_gem_object *obj,
+				      int flags)
+{
+	return dma_buf_export(obj, &vgem_dmabuf_ops, obj->size, flags, NULL);
+}
+
+struct drm_gem_object *vgem_gem_prime_import(struct drm_device *dev,
+					     struct dma_buf *dma_buf)
+{
+	struct drm_vgem_gem_object *obj = NULL;
+	struct dma_buf_attachment *attach = NULL;
+	struct sg_table *sg = NULL;
+	int num_pages;
+	int ret;
+
+	attach = dma_buf_attach(dma_buf, dev->dev);
+	if (IS_ERR(attach)) {
+		ret = PTR_ERR(attach);
+		goto fail;
+	}
+
+	get_dma_buf(dma_buf);
+
+	sg = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
+	if (IS_ERR(sg)) {
+		ret = PTR_ERR(sg);
+		goto fail_detach;
+	}
+
+	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+	if (obj == NULL) {
+		ret = -ENOMEM;
+		goto fail_unmap;
+	}
+
+	obj->base.import_attach = attach;
+	obj->sg = sg;
+
+	/* As a result of this mmap will not work -yet- */
+	ret = drm_gem_object_init(dev, &obj->base, dma_buf->size);
+	if (ret) {
+		ret = -ENOMEM;
+		goto fail_free;
+	}
+
+	num_pages = obj->base.size / PAGE_SIZE;
+
+	obj->pages = drm_malloc_ab(num_pages, sizeof(struct page *));
+	if (!obj->pages) {
+		ret = -ENOMEM;
+		goto fail_gem_release;
+	}
+
+	ret = drm_prime_sg_to_page_addr_arrays(sg, obj->pages, NULL, num_pages);
+	if (ret) {
+		ret = -ENOMEM;
+		goto fail_free_pages;
+	}
+
+	return &obj->base;
+
+fail_free_pages:
+	drm_free_large(obj->pages);
+fail_gem_release:
+	drm_gem_object_release(&obj->base);
+fail_free:
+	kfree(obj);
+fail_unmap:
+	dma_buf_unmap_attachment(attach, sg, DMA_BIDIRECTIONAL);
+fail_detach:
+	dma_buf_detach(dma_buf, attach);
+	dma_buf_put(dma_buf);
+fail:
+	return ERR_PTR(ret);
+}
diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
new file mode 100644
index 0000000..b0032c9
--- /dev/null
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -0,0 +1,407 @@
+/*
+ * Copyright 2011 Red Hat, Inc.
+ * Copyright © 2014 The Chromium OS Authors
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software")
+ * to deal in the software without restriction, including without limitation
+ * on the rights to use, copy, modify, merge, publish, distribute, sub
+ * license, and/or sell copies of the Software, and to permit persons to whom
+ * them Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTIBILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES, OR OTHER LIABILITY, WHETHER
+ * IN AN ACTION OF CONTRACT, TORT, OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ *	Adam Jackson <ajax@redhat.com>
+ *	Ben Widawsky <ben@bwidawsk.net>
+ */
+
+/**
+ * This is vgem, a (non-hardware-backed) GEM service. This is used by Mesa's
+ * software renderer and the X server for efficient buffer sharing.
+ */
+
+#include <linux/module.h>
+#include <linux/ramfs.h>
+#include <linux/shmem_fs.h>
+#include "vgem_drv.h"
+
+#define DRIVER_NAME	"vgem"
+#define DRIVER_DESC	"Virtual GEM provider"
+#define DRIVER_DATE	"20120112"
+#define DRIVER_MAJOR	1
+#define DRIVER_MINOR	0
+
+static int vgem_load(struct drm_device *dev, unsigned long flags)
+{
+	platform_set_drvdata(dev->platformdev, dev);
+	return 0;
+}
+
+static int vgem_unload(struct drm_device *dev)
+{
+	return 0;
+}
+
+static int vgem_open(struct drm_device *dev, struct drm_file *file)
+{
+	return 0;
+}
+
+static void vgem_preclose(struct drm_device *dev, struct drm_file *file)
+{
+}
+
+static void vgem_lastclose(struct drm_device *dev)
+{
+}
+
+void vgem_gem_put_pages(struct drm_vgem_gem_object *obj)
+{
+	int num_pages = obj->base.size / PAGE_SIZE;
+	int i;
+
+	for (i = 0; i < num_pages; i++) {
+		if (obj->pages[i] == NULL)
+			break;
+		page_cache_release(obj->pages[i]);
+	}
+
+	drm_free_large(obj->pages);
+	obj->pages = NULL;
+}
+
+static void vgem_gem_free_object(struct drm_gem_object *obj)
+{
+	struct drm_vgem_gem_object *vgem_obj = to_vgem_bo(obj);
+
+	drm_gem_free_mmap_offset(obj);
+
+	if (obj->import_attach) {
+		drm_prime_gem_destroy(obj, vgem_obj->sg);
+		vgem_obj->pages = NULL;
+	}
+
+	drm_gem_object_release(obj);
+
+	if (vgem_obj->pages)
+		vgem_gem_put_pages(vgem_obj);
+
+	vgem_obj->pages = NULL;
+
+	kfree(vgem_obj);
+}
+
+int vgem_gem_get_pages(struct drm_vgem_gem_object *obj)
+{
+	struct address_space *mapping;
+	gfp_t gfpmask = GFP_KERNEL;
+	int num_pages, i, ret = 0;
+
+	num_pages = obj->base.size / PAGE_SIZE;
+
+	if (!obj->pages) {
+		obj->pages = drm_malloc_ab(num_pages, sizeof(struct page *));
+		if (obj->pages == NULL)
+			return -ENOMEM;
+	} else {
+		return 0;
+	}
+
+	mapping = obj->base.filp->f_path.dentry->d_inode->i_mapping;
+	gfpmask |= mapping_gfp_mask(mapping);
+
+	for (i = 0; i < num_pages; i++) {
+		struct page *page;
+		obj->pages[i] = NULL;
+		page = shmem_read_mapping_page_gfp(mapping, i, gfpmask);
+		if (IS_ERR(page)) {
+			ret = PTR_ERR(page);
+			goto err_out;
+		}
+		obj->pages[i] = page;
+	}
+
+	return ret;
+
+err_out:
+	vgem_gem_put_pages(obj);
+	return ret;
+}
+
+static int vgem_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+	struct drm_vgem_gem_object *obj = to_vgem_bo(vma->vm_private_data);
+	struct drm_device *dev = obj->base.dev;
+	loff_t num_pages;
+	pgoff_t page_offset;
+	int ret;
+
+	/* We don't use vmf->pgoff since that has the fake offset */
+	page_offset = ((unsigned long)vmf->virtual_address - vma->vm_start) >>
+		PAGE_SHIFT;
+
+	num_pages = DIV_ROUND_UP(obj->base.size, PAGE_SIZE);
+
+	if (page_offset > num_pages)
+		return VM_FAULT_SIGBUS;
+
+	mutex_lock(&dev->struct_mutex);
+
+	ret = vm_insert_page(vma, (unsigned long)vmf->virtual_address,
+			     obj->pages[page_offset]);
+
+	mutex_unlock(&dev->struct_mutex);
+	switch (ret) {
+	case 0:
+		return VM_FAULT_NOPAGE;
+	case -ENOMEM:
+		return VM_FAULT_OOM;
+	case -EBUSY:
+		return VM_FAULT_RETRY;
+	case -EFAULT:
+	case -EINVAL:
+		return VM_FAULT_SIGBUS;
+	default:
+		WARN_ON(1);
+		return VM_FAULT_SIGBUS;
+	}
+}
+
+static struct vm_operations_struct vgem_gem_vm_ops = {
+	.fault = vgem_gem_fault,
+	.open = drm_gem_vm_open,
+	.close = drm_gem_vm_close,
+};
+
+/* ioctls */
+
+static struct drm_gem_object *vgem_gem_create(struct drm_device *dev,
+					      struct drm_file *file,
+					      unsigned int *handle,
+					      unsigned long size)
+{
+	struct drm_vgem_gem_object *obj;
+	struct drm_gem_object *gem_object;
+	int err;
+
+	size = roundup(size, PAGE_SIZE);
+
+	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+	if (!obj)
+		return ERR_PTR(-ENOMEM);
+
+	gem_object = &obj->base;
+
+	err = drm_gem_object_init(dev, gem_object, size);
+	if (err)
+		goto out;
+
+	err = drm_gem_handle_create(file, gem_object, handle);
+	if (err)
+		goto handle_out;
+
+	drm_gem_object_unreference_unlocked(gem_object);
+
+	return gem_object;
+
+handle_out:
+	drm_gem_object_release(gem_object);
+out:
+	kfree(obj);
+	return ERR_PTR(err);
+}
+
+static int vgem_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
+				struct drm_mode_create_dumb *args)
+{
+	struct drm_gem_object *gem_object;
+	uint64_t size;
+
+	size = args->height * args->width * DIV_ROUND_UP(args->bpp, 8);
+	if (size == 0)
+		return -EINVAL;
+
+	gem_object = vgem_gem_create(dev, file, &args->handle, size);
+
+	if (IS_ERR(gem_object)) {
+		DRM_DEBUG_DRIVER("object creation failed\n");
+		return PTR_ERR(gem_object);
+	}
+
+	args->size = gem_object->size;
+	args->pitch = args->width;
+
+	DRM_DEBUG_DRIVER("Created object of size %lld\n", size);
+
+	return 0;
+}
+
+int vgem_gem_dumb_map(struct drm_file *file, struct drm_device *dev,
+		      uint32_t handle, uint64_t *offset)
+{
+	int ret = 0;
+	struct drm_gem_object *obj;
+
+	mutex_lock(&dev->struct_mutex);
+	obj = drm_gem_object_lookup(dev, file, handle);
+	if (!obj) {
+		ret = -ENOENT;
+		goto unlock;
+	}
+
+	if (!drm_vma_node_has_offset(&obj->vma_node)) {
+		ret = drm_gem_create_mmap_offset(obj);
+		if (ret)
+			goto unref;
+	}
+
+	BUG_ON(!obj->filp);
+
+	obj->filp->private_data = obj;
+
+	ret = vgem_gem_get_pages(to_vgem_bo(obj));
+	if (ret)
+		goto fail_get_pages;
+
+	*offset = drm_vma_node_offset_addr(&obj->vma_node);
+
+	goto unref;
+
+fail_get_pages:
+	drm_gem_free_mmap_offset(obj);
+unref:
+	drm_gem_object_unreference(obj);
+unlock:
+	mutex_unlock(&dev->struct_mutex);
+	return ret;
+}
+
+int vgem_drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+	int ret;
+
+	ret = drm_gem_mmap(filp, vma);
+	if (ret)
+		return ret;
+
+	vma->vm_flags &= ~VM_PFNMAP;
+	vma->vm_flags |= VM_MIXEDMAP;
+
+	return ret;
+}
+
+
+static struct drm_ioctl_desc vgem_ioctls[] = {
+};
+
+static const struct file_operations vgem_driver_fops = {
+	.owner		= THIS_MODULE,
+	.open		= drm_open,
+	.mmap		= vgem_drm_gem_mmap,
+	.poll		= drm_poll,
+	.read		= drm_read,
+	.unlocked_ioctl = drm_ioctl,
+	.release	= drm_release,
+};
+
+static struct drm_driver vgem_driver = {
+	.driver_features	= DRIVER_GEM | DRIVER_PRIME,
+	.load			= vgem_load,
+	.unload			= vgem_unload,
+	.open			= vgem_open,
+	.preclose		= vgem_preclose,
+	.lastclose		= vgem_lastclose,
+	.gem_free_object	= vgem_gem_free_object,
+	.gem_vm_ops		= &vgem_gem_vm_ops,
+	.ioctls			= vgem_ioctls,
+	.fops			= &vgem_driver_fops,
+	.dumb_create		= vgem_gem_dumb_create,
+	.dumb_map_offset	= vgem_gem_dumb_map,
+	.prime_handle_to_fd	= drm_gem_prime_handle_to_fd,
+	.prime_fd_to_handle	= drm_gem_prime_fd_to_handle,
+	.gem_prime_export	= vgem_gem_prime_export,
+	.gem_prime_import	= vgem_gem_prime_import,
+	.name	= DRIVER_NAME,
+	.desc	= DRIVER_DESC,
+	.date	= DRIVER_DATE,
+	.major	= DRIVER_MAJOR,
+	.minor	= DRIVER_MINOR,
+};
+
+static int vgem_platform_probe(struct platform_device *pdev)
+{
+	vgem_driver.num_ioctls = ARRAY_SIZE(vgem_ioctls);
+
+	return drm_platform_init(&vgem_driver, pdev);
+}
+
+static int vgem_platform_remove(struct platform_device *pdev)
+{
+	drm_put_dev(platform_get_drvdata(pdev));
+	return 0;
+}
+
+static struct platform_driver vgem_platform_driver = {
+	.probe		= vgem_platform_probe,
+	.remove		= vgem_platform_remove,
+	.driver		= {
+		.owner	= THIS_MODULE,
+		.name	= DRIVER_NAME,
+	},
+};
+
+static struct platform_device *vgem_device;
+
+static int __init vgem_init(void)
+{
+	int ret;
+
+	ret = platform_driver_register(&vgem_platform_driver);
+	if (ret)
+		goto out;
+
+	vgem_device = platform_device_alloc("vgem", -1);
+	if (!vgem_device) {
+		ret = -ENOMEM;
+		goto unregister_out;
+	}
+
+	vgem_device->dev.coherent_dma_mask = ~0ULL;
+	vgem_device->dev.dma_mask = &vgem_device->dev.coherent_dma_mask;
+
+	ret = platform_device_add(vgem_device);
+	if (ret)
+		goto put_out;
+
+	return 0;
+
+put_out:
+	platform_device_put(vgem_device);
+unregister_out:
+	platform_driver_unregister(&vgem_platform_driver);
+out:
+	return ret;
+}
+
+static void __exit vgem_exit(void)
+{
+	platform_device_unregister(vgem_device);
+	platform_driver_unregister(&vgem_platform_driver);
+}
+
+module_init(vgem_init);
+module_exit(vgem_exit);
+
+MODULE_AUTHOR("Red Hat, Inc.");
+MODULE_DESCRIPTION(DRIVER_DESC);
+MODULE_LICENSE("GPL and additional rights");
diff --git a/drivers/gpu/drm/vgem/vgem_drv.h b/drivers/gpu/drm/vgem/vgem_drv.h
new file mode 100644
index 0000000..6d889a2
--- /dev/null
+++ b/drivers/gpu/drm/vgem/vgem_drv.h
@@ -0,0 +1,62 @@
+/*
+ * Copyright © 2012 Intel Corporation
+ * Copyright © 2014 The Chromium OS Authors
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *	Ben Widawsky <ben@bwidawsk.net>
+ *
+ */
+
+#ifndef _VGEM_DRV_H_
+#define _VGEM_DRV_H_
+
+#include <drm/drmP.h>
+#include <drm/drm_gem.h>
+
+#define to_vgem_bo(x) container_of(x, struct drm_vgem_gem_object, base)
+struct drm_vgem_gem_object {
+	struct drm_gem_object base;
+	struct page **pages;
+	struct sg_table *sg;
+};
+
+/* vgem_drv.c */
+extern void vgem_gem_put_pages(struct drm_vgem_gem_object *obj);
+extern int vgem_gem_get_pages(struct drm_vgem_gem_object *obj);
+
+/* vgem_dma_buf.c */
+extern int vgem_prime_to_fd(struct drm_device *dev,
+			    struct drm_file *file_priv,
+			    uint32_t handle, int *prime_fd);
+
+extern int vgem_prime_to_handle(struct drm_device *dev,
+				struct drm_file *file_priv,
+				int prime_fd, uint32_t *handle);
+
+extern struct dma_buf *vgem_gem_prime_export(struct drm_device *dev,
+					     struct drm_gem_object *obj,
+					     int flags);
+
+extern struct drm_gem_object *vgem_gem_prime_import(struct drm_device *dev,
+						    struct dma_buf *dma_buf);
+
+#endif