Message ID: 20200227002601.745-1-gurchetansingh@chromium.org
Series:     *** Refactor struct virtgpu_object ***
On Wed, Feb 26, 2020 at 04:25:53PM -0800, Gurchetan Singh wrote:
> The main motivation behind this is to eventually have something like this:
>
> struct virtio_gpu_shmem {
>         struct drm_gem_shmem_object base;
>         uint32_t hw_res_handle;
>         struct sg_table *pages;
>         (...)
> };
>
> struct virtio_gpu_vram {
>         struct drm_gem_object base; // or *drm_gem_vram_object
>         uint32_t hw_res_handle;
>         {offset, range};
>         (...)
> };

Given that we probably will not use drm_gem_vram_object and
drm_gem_shmem_object->base is drm_gem_object I think we can
go this route:

struct virtgpu_object {
        struct drm_gem_shmem_object base;
        enum object_type;
        uint32_t hw_res_handle;
        [ ... ]
};

struct virtgpu_object_shmem {
        struct virtgpu_object base;
        struct sg_table *pages;
        [ ... ]
};

struct virtgpu_object_hostmem {
        struct virtgpu_object base;
        {offset, range};
        (...)
};

Then have helpers to get virtgpu_object_hostmem / virtgpu_object_shmem
from virtgpu_object via container_of which also sanity-check
object_type (maybe we can check drm_gem_object->ops for that instead of
adding a new field).

> Sending this out to solicit feedback on this approach. Whichever approach
> we decide, landing incremental changes to internal structures reduces
> rebasing costs and avoids mega-changes.

That certainly makes sense.

cheers,
  Gerd
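[Editorial note: for illustration, a minimal C sketch of the container_of-based
cast helper proposed above, keeping the existing virtio_gpu_* naming. The field
layout, the helper name, and the virtio_gpu_hostmem_funcs ops table are
assumptions made for this sketch, not actual driver code.]

#include <linux/types.h>
#include <drm/drm_gem.h>
#include <drm/drm_gem_shmem_helper.h>

struct virtio_gpu_object {
        struct drm_gem_shmem_object base;       /* base.base is drm_gem_object */
        uint32_t hw_res_handle;
        /* ... */
};

struct virtio_gpu_object_hostmem {
        struct virtio_gpu_object base;
        uint64_t offset;
        uint64_t range;
        /* ... */
};

/* Assumed drm_gem_object_funcs table installed on hostmem objects. */
extern const struct drm_gem_object_funcs virtio_gpu_hostmem_funcs;

static inline struct virtio_gpu_object_hostmem *
to_virtio_gpu_hostmem(struct virtio_gpu_object *bo)
{
        /* Sanity-check the object type via drm_gem_object->funcs
         * instead of carrying a separate object_type field. */
        if (WARN_ON(bo->base.base.funcs != &virtio_gpu_hostmem_funcs))
                return NULL;
        return container_of(bo, struct virtio_gpu_object_hostmem, base);
}

[Checking the funcs pointer (the field the mail calls drm_gem_object->ops)
avoids a redundant enum, and the WARN_ON makes an accidental cast of a shmem
object to hostmem loud rather than silently reading past the struct.]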
On Wed, Feb 26, 2020 at 04:25:53PM -0800, Gurchetan Singh wrote:
> The main motivation behind this is to eventually have something like this:
patches 1+2 cherry-picked and pushed to -next.
thanks,
Gerd
On Wed, Feb 26, 2020 at 11:23 PM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
> On Wed, Feb 26, 2020 at 04:25:53PM -0800, Gurchetan Singh wrote:
> > The main motivation behind this is to eventually have something like this:
> >
> > struct virtio_gpu_shmem {
> >         struct drm_gem_shmem_object base;
> >         uint32_t hw_res_handle;
> >         struct sg_table *pages;
> >         (...)
> > };
> >
> > struct virtio_gpu_vram {
> >         struct drm_gem_object base; // or *drm_gem_vram_object
> >         uint32_t hw_res_handle;
> >         {offset, range};
> >         (...)
> > };
>
> Given that we probably will not use drm_gem_vram_object

Makes sense not to use drm_gem_vram_object for now.

> and
> drm_gem_shmem_object->base is drm_gem_object I think we can
> go this route:
>
> struct virtgpu_object {

Yeah, using "virtgpu_" rather than "virtio_gpu" makes sense. A bit less
wordy, though the current code is based on "virtio_gpu".

>         struct drm_gem_shmem_object base;
>         enum object_type;
>         uint32_t hw_res_handle;
>         [ ... ]
> };
>
> struct virtgpu_object_shmem {
>         struct virtgpu_object base;
>         struct sg_table *pages;
>         [ ... ]
> };
>
> struct virtgpu_object_hostmem {
>         struct virtgpu_object base;
>         {offset, range};
>         (...)

I'm a kernel newbie, so it's not obvious to me why struct
drm_gem_shmem_object would be a base class for struct
virtgpu_object_hostmem?

The core utility of drm_gem_shmem_helper seems to be getting pages,
pinning pages, and releasing pages. But with host-mem, we won't have an
array of pages, but just an (offset, length) -- which
drm_gem_shmem_helper function is useful here?

Side question: is drm_gem_object_funcs.vmap(..) /
drm_gem_object_funcs.vunmap(..) even possible for hostmem?

P.S: The proof of concept hostmem implementation is on Gitlab [1][2].
Some notes:

- io_remap_pfn_range to get a userspace mapping
- calls drm_gem_private_object_init(..) rather than drm_gem_object_init
  [which sets up shmemfs backing store].

[1] https://gitlab.freedesktop.org/virgl/drm-misc-next/-/blob/virtio-gpu-next/drivers/gpu/drm/virtio/virtgpu_drv.h#L80
[2] https://gitlab.freedesktop.org/virgl/drm-misc-next/-/blob/virtio-gpu-next/drivers/gpu/drm/virtio/virtgpu_hostmem.c#L179

> };
>
> Then have helpers to get virtgpu_object_hostmem / virtgpu_object_shmem
> from virtgpu_object via container_of which also sanity-check
> object_type (maybe we can check drm_gem_object->ops for that instead of
> adding a new field).
>
> > Sending this out to solicit feedback on this approach. Whichever approach
> > we decide, landing incremental changes to internal structures reduces
> > rebasing costs and avoids mega-changes.
>
> That certainly makes sense.
>
> cheers,
>   Gerd
>
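[Editorial note: to make the two P.S. bullet points concrete, here is a rough,
hypothetical C sketch of hostmem object setup and the userspace mapping. It is
based only on the description in the mail, not on the linked proof-of-concept
code; the bar_addr field, the function names, and the virtio_gpu_hostmem_funcs
table are placeholders.]

#include <linux/slab.h>
#include <linux/mm.h>
#include <drm/drm_gem.h>

/* Create a hostmem object; no shmemfs backing store is allocated. */
static int virtio_gpu_hostmem_create(struct drm_device *dev, size_t size,
                                     struct virtio_gpu_object_hostmem **out)
{
        struct virtio_gpu_object_hostmem *hostmem;

        hostmem = kzalloc(sizeof(*hostmem), GFP_KERNEL);
        if (!hostmem)
                return -ENOMEM;

        /* drm_gem_private_object_init() instead of drm_gem_object_init(),
         * which would set up a shmemfs backing store we don't want here. */
        drm_gem_private_object_init(dev, &hostmem->base.base.base, size);
        hostmem->base.base.base.funcs = &virtio_gpu_hostmem_funcs;

        *out = hostmem;
        return 0;
}

/* Map the pci bar region backing the object into userspace. */
static int virtio_gpu_hostmem_mmap(struct drm_gem_object *obj,
                                   struct vm_area_struct *vma)
{
        struct virtio_gpu_object_hostmem *hostmem =
                container_of(obj, struct virtio_gpu_object_hostmem,
                             base.base.base);

        vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
        return io_remap_pfn_range(vma, vma->vm_start,
                                  hostmem->bar_addr >> PAGE_SHIFT,
                                  vma->vm_end - vma->vm_start,
                                  vma->vm_page_prot);
}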
  Hi,

> > struct virtgpu_object {
>
> Yeah, using "virtgpu_" rather than "virtio_gpu" makes sense.

It wasn't my intention to suggest a rename. It's just that the kernel
is a bit inconsistent here and I picked the wrong name here. Most
places use virtio_gpu but some use virtgpu (file names, ioctl api).

> > struct virtgpu_object_hostmem {
> >         struct virtgpu_object base;
> >         {offset, range};
> >         (...)
>
> I'm a kernel newbie, so it's not obvious to me why struct
> drm_gem_shmem_object would be a base class for struct
> virtgpu_object_hostmem?

I think it is easier to just continue using virtio_gpu_object in most
places and cast to virtio_gpu_object_{shmem,hostmem} only if needed.
Makes it easier to deal with common fields like hw_res_handle.

In the hostmem case we would simply not use the drm_gem_shmem_object
fields except for drm_gem_shmem_object.base (which is drm_gem_object).

> Side question: is drm_gem_object_funcs.vmap(..) /
> drm_gem_object_funcs.vunmap(..) even possible for hostmem?

Sure. Using ioremap should work, after asking the host to map the
object at some location in the pci bar.

cheers,
  Gerd
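[Editorial note: as a hypothetical illustration of that answer, vmap/vunmap for
a hostmem object could look roughly like the sketch below. It assumes the host
has already mapped the resource at hostmem->bar_addr in the pci bar; the field
names are placeholders, and the drm_gem_object_funcs.vmap/vunmap signatures are
those of the kernel at the time of this thread (early 2020).]

#include <linux/io.h>
#include <drm/drm_gem.h>

static void *virtio_gpu_hostmem_vmap(struct drm_gem_object *obj)
{
        struct virtio_gpu_object_hostmem *hostmem =
                container_of(obj, struct virtio_gpu_object_hostmem,
                             base.base.base);

        /* The host is assumed to have placed the resource at bar_addr
         * before this is called; __force drops the __iomem annotation
         * since the vmap callback returns a plain pointer. */
        return (void __force *)ioremap(hostmem->bar_addr, hostmem->range);
}

static void virtio_gpu_hostmem_vunmap(struct drm_gem_object *obj, void *vaddr)
{
        iounmap((void __iomem *)vaddr);
}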