Message ID | 20191122205734.15925-4-niranjana.vishwanathapura@intel.com (mailing list archive) |
---|---
State | New, archived |
Series | drm/i915/svm: Add SVM support
Quoting Niranjana Vishwanathapura (2019-11-22 20:57:24)
> Shared Virtual Memory (SVM) runtime allocator support allows
> binding a shared virtual address to a buffer object (BO) in the
> device page table through an ioctl call.

The ioctl though is not svm specific, it is to do with "bulk residency"
and can be used to reduce execbuf traffic to provide virtual address
layout controls to e.g. Vulkan clients.

I915_VM_BIND {
	uint32_t vm_id;
	int32_t fd;		/* or -1 for anon, or buf depending on flags */
	uint64_t flags;
	uint64_t offset;	/* offset into fd [page aligned] */
	uint64_t length;	/* page aligned */
	uint64_t iova;		/* page aligned */
	uint64_t extensions;
}; /* where page aligned is actually more I915_GTT_PAGE_ALIGNMENT */

as I recall. I also recall it being part of a future command stream
interface to reduce ioctls, but that is another story.
-Chris
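To make the sketch above concrete, a userspace caller might drive such an
interface as below. This is a minimal sketch assuming the struct layout
Chris outlines; the DRM_IOCTL_I915_VM_BIND request number is a placeholder,
as no such ioctl exists at the time of this discussion.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>

/* Placeholder request number -- the real one would come from i915_drm.h
 * once (if) the interface is merged. */
#define DRM_IOCTL_I915_VM_BIND 0

struct i915_vm_bind {
	uint32_t vm_id;
	int32_t fd;		/* or -1 for anon, or buf depending on flags */
	uint64_t flags;
	uint64_t offset;	/* offset into fd [page aligned] */
	uint64_t length;	/* page aligned */
	uint64_t iova;		/* page aligned */
	uint64_t extensions;
};

/* Bind one object at a client-chosen GPU virtual address. */
static int bind_bo_at(int drm_fd, uint32_t vm_id, int bo_fd,
		      uint64_t size, uint64_t gpu_va)
{
	struct i915_vm_bind bind;

	memset(&bind, 0, sizeof(bind));
	bind.vm_id = vm_id;
	bind.fd = bo_fd;	/* e.g. a dma-buf fd for the object */
	bind.offset = 0;	/* bind from the start of the object */
	bind.length = size;	/* multiple of I915_GTT_PAGE_ALIGNMENT */
	bind.iova = gpu_va;	/* where the client wants it mapped */

	return ioctl(drm_fd, DRM_IOCTL_I915_VM_BIND, &bind);
}

A Vulkan driver could issue a batch of such binds up front and then skip
per-execbuf relocation traffic, which is the "bulk residency" point above.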
On Mon, Nov 25, 2019 at 09:59:37AM +0000, Chris Wilson wrote:
>Quoting Niranjana Vishwanathapura (2019-11-22 20:57:24)
>> Shared Virtual Memory (SVM) runtime allocator support allows
>> binding a shared virtual address to a buffer object (BO) in the
>> device page table through an ioctl call.
>
>The ioctl though is not svm specific, it is to do with "bulk residency"
>and can be used to reduce execbuf traffic to provide virtual address
>layout controls to e.g. Vulkan clients.
>
>I915_VM_BIND {
>	uint32_t vm_id;
>	int32_t fd;		/* or -1 for anon, or buf depending on flags */
>	uint64_t flags;
>	uint64_t offset;	/* offset into fd [page aligned] */
>	uint64_t length;	/* page aligned */
>	uint64_t iova;		/* page aligned */
>	uint64_t extensions;
>}; /* where page aligned is actually more I915_GTT_PAGE_ALIGNMENT */
>
>as I recall. I also recall it being part of a future command stream
>interface to reduce ioctls, but that is another story.

Thanks Chris.
I will change I915_BIND to I915_VM_BIND.

Currently, it is only addressing binding SVM system (buffer) and runtime (BOs)
allocations. But it can be expanded for other bindings. I have a 'type' field
instead of 'fd', and 'extensions' & 'iova' can be added later if required.
Is that OK?

>-Chris
Quoting Niranjan Vishwanathapura (2019-11-25 18:40:57)
> On Mon, Nov 25, 2019 at 09:59:37AM +0000, Chris Wilson wrote:
> >Quoting Niranjana Vishwanathapura (2019-11-22 20:57:24)
> >> Shared Virtual Memory (SVM) runtime allocator support allows
> >> binding a shared virtual address to a buffer object (BO) in the
> >> device page table through an ioctl call.
> >
> >The ioctl though is not svm specific, it is to do with "bulk residency"
> >and can be used to reduce execbuf traffic to provide virtual address
> >layout controls to e.g. Vulkan clients.
> >
> >I915_VM_BIND {
> >	uint32_t vm_id;
> >	int32_t fd;		/* or -1 for anon, or buf depending on flags */
> >	uint64_t flags;
> >	uint64_t offset;	/* offset into fd [page aligned] */
> >	uint64_t length;	/* page aligned */
> >	uint64_t iova;		/* page aligned */
> >	uint64_t extensions;
> >}; /* where page aligned is actually more I915_GTT_PAGE_ALIGNMENT */
> >
> >as I recall. I also recall it being part of a future command stream
> >interface to reduce ioctls, but that is another story.
>
> Thanks Chris.
> I will change I915_BIND to I915_VM_BIND.

We're very much depending on GEM VM even if BOs wouldn't be used,
so this is best called I915_GEM_VM_BIND to match I915_GEM_VM_CREATE
and avoid user confusion.

> Currently, it is only addressing binding SVM system (buffer) and runtime (BOs)
> allocations. But it can be expanded for other bindings. I have a 'type' field
> instead of 'fd', and 'extensions' & 'iova' can be added later if required.

We should try to have the uAPI as final as early as possible. The
change process is harder the later it is done, even for RFC :)

On the same note, I'm inclined to have I915_SVM_MIGRATE called
I915_GEM_VM_PREFAULT from the start, as I have already got some
confused questions from folks who mix it with memory pressure
initiated migration. And it's inherently racy as anybody could
race with it, so PREFAULT would give an impression of that.

And I think I915_GEM_VM_PREFAULT would be a generally applicable
function, just like I915_GEM_VM_BIND. So we should use struct
member names that are close to I915_GEM_VM_BIND.

Better ideas for a name to emphasize the nature of what is being
done? I915_GEM_VM_FAULT/I915_GEM_VM_{M,}ADVICE...

Regards, Joonas

> Is that OK?
>
> >-Chris
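For context on the naming parallel: I915_GEM_VM_CREATE is already part of
the i915 uAPI, and the vm_id it returns is what a bind ioctl would operate
on. A minimal sketch of that existing call, using libdrm's drmIoctl() with
error handling trimmed:

#include <stdint.h>
#include <xf86drm.h>	/* drmIoctl() from libdrm */
#include <i915_drm.h>	/* struct drm_i915_gem_vm_control */

/* Create a fresh GEM VM (ppGTT address space); the returned vm_id
 * identifies the address space a future I915_GEM_VM_BIND would bind
 * objects into. */
static int create_vm(int drm_fd, uint32_t *vm_id)
{
	struct drm_i915_gem_vm_control ctl = { 0 };

	if (drmIoctl(drm_fd, DRM_IOCTL_I915_GEM_VM_CREATE, &ctl))
		return -1;

	*vm_id = ctl.vm_id;
	return 0;
}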
On Tue, Nov 26, 2019 at 03:59:31PM +0200, Joonas Lahtinen wrote:
>Quoting Niranjan Vishwanathapura (2019-11-25 18:40:57)
>> On Mon, Nov 25, 2019 at 09:59:37AM +0000, Chris Wilson wrote:
>> >Quoting Niranjana Vishwanathapura (2019-11-22 20:57:24)
>> >> Shared Virtual Memory (SVM) runtime allocator support allows
>> >> binding a shared virtual address to a buffer object (BO) in the
>> >> device page table through an ioctl call.
>> >
>> >The ioctl though is not svm specific, it is to do with "bulk residency"
>> >and can be used to reduce execbuf traffic to provide virtual address
>> >layout controls to e.g. Vulkan clients.
>> >
>> >I915_VM_BIND {
>> >	uint32_t vm_id;
>> >	int32_t fd;		/* or -1 for anon, or buf depending on flags */
>> >	uint64_t flags;
>> >	uint64_t offset;	/* offset into fd [page aligned] */
>> >	uint64_t length;	/* page aligned */
>> >	uint64_t iova;		/* page aligned */
>> >	uint64_t extensions;
>> >}; /* where page aligned is actually more I915_GTT_PAGE_ALIGNMENT */
>> >
>> >as I recall. I also recall it being part of a future command stream
>> >interface to reduce ioctls, but that is another story.
>>
>> Thanks Chris.
>> I will change I915_BIND to I915_VM_BIND.
>
>We're very much depending on GEM VM even if BOs wouldn't be used,
>so this is best called I915_GEM_VM_BIND to match I915_GEM_VM_CREATE
>and avoid user confusion.
>

Thanks Joonas.
Ok, makes sense. I will make it as such.

>> Currently, it is only addressing binding SVM system (buffer) and runtime (BOs)
>> allocations. But it can be expanded for other bindings. I have a 'type' field
>> instead of 'fd', and 'extensions' & 'iova' can be added later if required.
>
>We should try to have the uAPI as final as early as possible. The
>change process is harder the later it is done, even for RFC :)
>
>On the same note, I'm inclined to have I915_SVM_MIGRATE called
>I915_GEM_VM_PREFAULT from the start, as I have already got some
>confused questions from folks who mix it with memory pressure
>initiated migration. And it's inherently racy as anybody could
>race with it, so PREFAULT would give an impression of that.
>
>And I think I915_GEM_VM_PREFAULT would be a generally applicable
>function, just like I915_GEM_VM_BIND. So we should use struct
>member names that are close to I915_GEM_VM_BIND.
>
>Better ideas for a name to emphasize the nature of what is being
>done? I915_GEM_VM_FAULT/I915_GEM_VM_{M,}ADVICE...
>

Though the current patchset only supports migrating pages from
host to device memory, I intend to support migrating from device
to host memory as well with the same ioctl. Users would want that.
Not sure what would be a good name then, _MIGRATE/_PREFETCH/_MOVE?

Also, migrating gem objects is currently handled by a separate ioctl
which is part of the LMEM patch series. I am open to merging these
ioctls together (similar to VM_BIND) into a single MIGRATE ioctl.

Niranjana

>Regards, Joonas
>
>> Is that OK?
>>
>> >-Chris
Quoting Niranjan Vishwanathapura (2019-11-27 21:23:56)
> >We should try to have the uAPI as final as early as possible. The
> >change process is harder the later it is done, even for RFC :)
> >
> >On the same note, I'm inclined to have I915_SVM_MIGRATE called
> >I915_GEM_VM_PREFAULT from the start, as I have already got some
> >confused questions from folks who mix it with memory pressure
> >initiated migration. And it's inherently racy as anybody could
> >race with it, so PREFAULT would give an impression of that.
> >
> >And I think I915_GEM_VM_PREFAULT would be a generally applicable
> >function, just like I915_GEM_VM_BIND. So we should use struct
> >member names that are close to I915_GEM_VM_BIND.
> >
> >Better ideas for a name to emphasize the nature of what is being
> >done? I915_GEM_VM_FAULT/I915_GEM_VM_{M,}ADVICE...
>
> Though the current patchset only supports migrating pages from
> host to device memory, I intend to support migrating from device
> to host memory as well with the same ioctl. Users would want that.
> Not sure what would be a good name then, _MIGRATE/_PREFETCH/_MOVE?

In the HMM concept the CPU access would trigger a fault, and trigger
the transition, wouldn't it? But you're correct that it is kind of
tied to the HMM concept, and may be confusing for BOs.

_PREFETCH is a good suggestion for the term, which leads to the
discussion about avoiding an explosion of IOCTLs. Chris suggested
consolidation, so maybe we should have I915_GEM_VM_{M,}ADVISE?

If we're looking at connections to fadvise(2), we're basically
talking about the equivalent of FADV_WILLNEED. That concept would
be quite familiar to users. GEM_VM_{M,}ADVISE with WILLNEED
and explicitly passing the memory region? Because we can't decipher
that from the running thread like the CPU can.

Thoughts?

> Also, migrating gem objects is currently handled by a separate ioctl
> which is part of the LMEM patch series. I am open to merging these
> ioctls together (similar to VM_BIND) into a single MIGRATE ioctl.

The IOCTL in the LMEM series is about setting the allowed backing
store types of a BO, not about the residency. There was some
discussion around doing explicit migrations by changing that list.
Current thinking is that we only need to allow setting it once
at creation. That also means it might be convertible to a creation
time only property.

That'd eliminate the need for the BO freeze IOCTL that was discussed
with Mesa folks.

Regards, Joonas
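If the ADVISE direction were taken, the argument struct might look
something like the sketch below. Every name here is a placeholder meant
only to pin down the shape under discussion (an advice verb plus an
explicitly passed target region), not a committed interface.

#include <linux/types.h>

/* Hypothetical uAPI sketch only -- no such ioctl exists. */
struct drm_i915_gem_vm_advise {
	__u32 vm_id;	/* address space, as returned by I915_GEM_VM_CREATE */
	__u32 advice;	/* e.g. a WILLNEED-style hint, mirroring fadvise(2) */
	__u64 start;	/* page-aligned start of the VA range */
	__u64 length;	/* page-aligned length of the range */
	__u32 region;	/* explicit target memory region; unlike the CPU
			 * case it cannot be inferred from the running
			 * thread */
	__u32 pad;
};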
On Thu, Nov 28, 2019 at 02:12:30PM +0200, Joonas Lahtinen wrote:
>Quoting Niranjan Vishwanathapura (2019-11-27 21:23:56)
>> >We should try to have the uAPI as final as early as possible. The
>> >change process is harder the later it is done, even for RFC :)
>> >
>> >On the same note, I'm inclined to have I915_SVM_MIGRATE called
>> >I915_GEM_VM_PREFAULT from the start, as I have already got some
>> >confused questions from folks who mix it with memory pressure
>> >initiated migration. And it's inherently racy as anybody could
>> >race with it, so PREFAULT would give an impression of that.
>> >
>> >And I think I915_GEM_VM_PREFAULT would be a generally applicable
>> >function, just like I915_GEM_VM_BIND. So we should use struct
>> >member names that are close to I915_GEM_VM_BIND.
>> >
>> >Better ideas for a name to emphasize the nature of what is being
>> >done? I915_GEM_VM_FAULT/I915_GEM_VM_{M,}ADVICE...
>> >
>>
>> Though the current patchset only supports migrating pages from
>> host to device memory, I intend to support migrating from device
>> to host memory as well with the same ioctl. Users would want that.
>> Not sure what would be a good name then, _MIGRATE/_PREFETCH/_MOVE?
>
>In the HMM concept the CPU access would trigger a fault, and trigger
>the transition, wouldn't it? But you're correct that it is kind of
>tied to the HMM concept, and may be confusing for BOs.
>

Yes it does. But I think we should give the user a mechanism to
explicitly migrate/prefetch it back to system memory.

>_PREFETCH is a good suggestion for the term, which leads to the
>discussion about avoiding an explosion of IOCTLs. Chris suggested
>consolidation, so maybe we should have I915_GEM_VM_{M,}ADVISE?
>
>If we're looking at connections to fadvise(2), we're basically
>talking about the equivalent of FADV_WILLNEED. That concept would
>be quite familiar to users. GEM_VM_{M,}ADVISE with WILLNEED
>and explicitly passing the memory region? Because we can't decipher
>that from the running thread like the CPU can.
>
>Thoughts?

Yeah, it is closer to mbind (instead of a nodemask, we specify
memory region/s). So, I915_GEM_VM_MBIND? I am ok with _PREFETCH also.

>> Also, migrating gem objects is currently handled by a separate ioctl
>> which is part of the LMEM patch series. I am open to merging these
>> ioctls together (similar to VM_BIND) into a single MIGRATE ioctl.
>
>The IOCTL in the LMEM series is about setting the allowed backing
>store types of a BO, not about the residency. There was some
>discussion around doing explicit migrations by changing that list.
>Current thinking is that we only need to allow setting it once
>at creation. That also means it might be convertible to a creation
>time only property.
>
>That'd eliminate the need for the BO freeze IOCTL that was discussed
>with Mesa folks.
>

Ok.

Thanks,
Niranjana

>Regards, Joonas
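For readers unfamiliar with the analogy drawn above: mbind(2) is the
CPU-side call that binds a virtual address range to a set of NUMA nodes,
and the proposal is essentially the same operation with i915 memory
regions in place of the nodemask. A minimal CPU-side example (link with
-lnuma):

#include <numaif.h>

/* Bind [addr, addr + len) to NUMA node 0, moving existing pages. */
static int bind_to_node0(void *addr, unsigned long len)
{
	unsigned long nodemask = 1UL << 0;	/* node 0 only */

	return mbind(addr, len, MPOL_BIND, &nodemask,
		     sizeof(nodemask) * 8, MPOL_MF_MOVE);
}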
diff --git a/drivers/gpu/drm/i915/Kconfig b/drivers/gpu/drm/i915/Kconfig
index ba9595960bbe..c2e48710eec8 100644
--- a/drivers/gpu/drm/i915/Kconfig
+++ b/drivers/gpu/drm/i915/Kconfig
@@ -137,6 +137,17 @@ config DRM_I915_GVT_KVMGT
 	  Choose this option if you want to enable KVMGT support for
 	  Intel GVT-g.
 
+config DRM_I915_SVM
+	bool "Enable Shared Virtual Memory support in i915"
+	depends on STAGING
+	depends on DRM_I915
+	default n
+	help
+	  Choose this option if you want Shared Virtual Memory (SVM)
+	  support in i915. With SVM support, one can share the virtual
+	  address space between a process and the GPU. SVM is supported
+	  on both integrated and discrete Intel GPUs.
+
 menu "drm/i915 Debugging"
 depends on DRM_I915
 depends on EXPERT
diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index e0fd10c0cfb8..75fe45633779 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -153,6 +153,9 @@ i915-y += \
 	intel_region_lmem.o \
 	intel_wopcm.o
 
+# SVM code
+i915-$(CONFIG_DRM_I915_SVM) += gem/i915_gem_svm.o
+
 # general-purpose microcontroller (GuC) support
 obj-y += gt/uc/
 i915-y += gt/uc/intel_uc.o \
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 7a87e8270460..9d43ae6d643a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -2864,10 +2864,14 @@ int
 i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data,
 			   struct drm_file *file)
 {
+	struct drm_i915_gem_exec_object2 *exec2_list, *exec2_list_user;
 	struct drm_i915_gem_execbuffer2 *args = data;
-	struct drm_i915_gem_exec_object2 *exec2_list;
-	struct drm_syncobj **fences = NULL;
 	const size_t count = args->buffer_count;
+	struct drm_syncobj **fences = NULL;
+	unsigned int i = 0, svm_count = 0;
+	struct i915_address_space *vm;
+	struct i915_gem_context *ctx;
+	struct i915_svm_obj *svm_obj;
 	int err;
 
 	if (!check_buffer_count(count)) {
@@ -2878,15 +2882,46 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data,
 	if (!i915_gem_check_execbuffer(args))
 		return -EINVAL;
 
+	ctx = i915_gem_context_lookup(file->driver_priv, args->rsvd1);
+	if (!ctx || !rcu_access_pointer(ctx->vm))
+		return -ENOENT;
+
+	rcu_read_lock();
+	vm = i915_vm_get(ctx->vm);
+	rcu_read_unlock();
+
+alloc_again:
+	svm_count = vm->svm_count;
 	/* Allocate an extra slot for use by the command parser */
-	exec2_list = kvmalloc_array(count + 1, eb_element_size(),
+	exec2_list = kvmalloc_array(count + svm_count + 1, eb_element_size(),
 				    __GFP_NOWARN | GFP_KERNEL);
 	if (exec2_list == NULL) {
 		DRM_DEBUG("Failed to allocate exec list for %zd buffers\n",
-			  count);
+			  count + svm_count);
 		return -ENOMEM;
 	}
-	if (copy_from_user(exec2_list,
+	mutex_lock(&vm->mutex);
+	if (svm_count != vm->svm_count) {
+		mutex_unlock(&vm->mutex);
+		kvfree(exec2_list);
+		goto alloc_again;
+	}
+
+	list_for_each_entry(svm_obj, &vm->svm_list, link) {
+		memset(&exec2_list[i], 0, sizeof(*exec2_list));
+		exec2_list[i].handle = svm_obj->handle;
+		exec2_list[i].offset = svm_obj->offset;
+		exec2_list[i].flags = EXEC_OBJECT_PINNED |
+				      EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+		i++;
+	}
+	exec2_list_user = &exec2_list[i];
+	args->buffer_count += svm_count;
+	mutex_unlock(&vm->mutex);
+	i915_vm_put(vm);
+	i915_gem_context_put(ctx);
+
+	if (copy_from_user(exec2_list_user,
 			   u64_to_user_ptr(args->buffers_ptr),
 			   sizeof(*exec2_list) * count)) {
 		DRM_DEBUG("copy %zd exec entries failed\n", count);
@@ -2903,6 +2938,7 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data,
 	}
 
 	err = i915_gem_do_execbuffer(dev, file, args, exec2_list, fences);
+	args->buffer_count -= svm_count;
 
 	/*
 	 * Now that we have begun execution of the batchbuffer, we ignore
@@ -2913,7 +2949,6 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data,
 	if (args->flags & __EXEC_HAS_RELOC) {
 		struct drm_i915_gem_exec_object2 __user *user_exec_list =
 			u64_to_user_ptr(args->buffers_ptr);
-		unsigned int i;
 
 		/* Copy the new buffer offsets back to the user's exec list. */
 		/*
@@ -2927,13 +2962,14 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data,
 			goto end;
 
 		for (i = 0; i < args->buffer_count; i++) {
-			if (!(exec2_list[i].offset & UPDATE))
+			u64 *offset = &exec2_list_user[i].offset;
+
+			if (!(*offset & UPDATE))
 				continue;
 
-			exec2_list[i].offset =
-				gen8_canonical_addr(exec2_list[i].offset & PIN_OFFSET_MASK);
-			unsafe_put_user(exec2_list[i].offset,
-					&user_exec_list[i].offset,
+			*offset = gen8_canonical_addr(*offset &
+						      PIN_OFFSET_MASK);
+			unsafe_put_user(*offset, &user_exec_list[i].offset,
 					end_user);
 		}
 end_user:
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_svm.c b/drivers/gpu/drm/i915/gem/i915_gem_svm.c
new file mode 100644
index 000000000000..973070056726
--- /dev/null
+++ b/drivers/gpu/drm/i915/gem/i915_gem_svm.c
@@ -0,0 +1,51 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#include "i915_drv.h"
+#include "i915_gem_gtt.h"
+
+int i915_gem_svm_bind(struct i915_address_space *vm,
+		      struct drm_i915_bind *args,
+		      struct drm_file *file)
+{
+	struct i915_svm_obj *svm_obj, *tmp;
+	struct drm_i915_gem_object *obj;
+	int ret = 0;
+
+	obj = i915_gem_object_lookup(file, args->handle);
+	if (!obj)
+		return -ENOENT;
+
+	/* FIXME: Need to handle case with unending batch buffers */
+	if (!(args->flags & I915_BIND_UNBIND)) {
+		svm_obj = kmalloc(sizeof(*svm_obj), GFP_KERNEL);
+		if (!svm_obj) {
+			ret = -ENOMEM;
+			goto put_obj;
+		}
+		svm_obj->handle = args->handle;
+		svm_obj->offset = args->start;
+	}
+
+	mutex_lock(&vm->mutex);
+	if (!(args->flags & I915_BIND_UNBIND)) {
+		list_add(&svm_obj->link, &vm->svm_list);
+		vm->svm_count++;
+	} else {
+		list_for_each_entry_safe(svm_obj, tmp, &vm->svm_list, link) {
+			if (svm_obj->handle != args->handle)
+				continue;
+
+			list_del_init(&svm_obj->link);
+			vm->svm_count--;
+			kfree(svm_obj);
+			break;
+		}
+	}
+	mutex_unlock(&vm->mutex);
+put_obj:
+	i915_gem_object_put(obj);
+	return ret;
+}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_svm.h b/drivers/gpu/drm/i915/gem/i915_gem_svm.h
new file mode 100644
index 000000000000..c394542dba75
--- /dev/null
+++ b/drivers/gpu/drm/i915/gem/i915_gem_svm.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#ifndef __I915_GEM_SVM_H
+#define __I915_GEM_SVM_H
+
+#include "i915_drv.h"
+
+#if defined(CONFIG_DRM_I915_SVM)
+int i915_gem_svm_bind(struct i915_address_space *vm,
+		      struct drm_i915_bind *args,
+		      struct drm_file *file);
+#else
+static inline int i915_gem_svm_bind(struct i915_address_space *vm,
+				    struct drm_i915_bind *args,
+				    struct drm_file *file)
+{ return -ENOTSUPP; }
+#endif
+
+#endif /* __I915_GEM_SVM_H */
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 4defadb26e7e..9c525d3f694c 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -61,6 +61,7 @@
 
 #include "gem/i915_gem_context.h"
 #include "gem/i915_gem_ioctls.h"
+#include "gem/i915_gem_svm.h"
 #include "gt/intel_gt.h"
 #include "gt/intel_gt_pm.h"
 #include "gt/intel_rc6.h"
@@ -2687,6 +2688,26 @@ i915_gem_reject_pin_ioctl(struct drm_device *dev, void *data,
 	return -ENODEV;
 }
 
+static int i915_bind_ioctl(struct drm_device *dev, void *data,
+			   struct drm_file *file)
+{
+	struct drm_i915_bind *args = data;
+	struct i915_address_space *vm;
+	int ret = -EINVAL;
+
+	vm = i915_gem_address_space_lookup(file->driver_priv, args->vm_id);
+	if (unlikely(!vm))
+		return -ENOENT;
+
+	switch (args->type) {
+	case I915_BIND_SVM_GEM_OBJ:
+		ret = i915_gem_svm_bind(vm, args, file);
+	}
+
+	i915_vm_put(vm);
+	return ret;
+}
+
 static const struct drm_ioctl_desc i915_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(I915_INIT, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
 	DRM_IOCTL_DEF_DRV(I915_FLUSH, drm_noop, DRM_AUTH),
@@ -2746,6 +2767,7 @@ static const struct drm_ioctl_desc i915_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(I915_QUERY, i915_query_ioctl, DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_GEM_VM_CREATE, i915_gem_vm_create_ioctl, DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_GEM_VM_DESTROY, i915_gem_vm_destroy_ioctl, DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(I915_BIND, i915_bind_ioctl, DRM_RENDER_ALLOW),
 };
 
 static struct drm_driver driver = {
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index bbf4dfdfa8ba..f7051e6df656 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1911,6 +1911,28 @@ i915_gem_context_lookup(struct drm_i915_file_private *file_priv, u32 id)
 	return ctx;
 }
 
+static inline struct i915_address_space *
+__i915_gem_address_space_lookup_rcu(struct drm_i915_file_private *file_priv,
+				    u32 id)
+{
+	return idr_find(&file_priv->vm_idr, id);
+}
+
+static inline struct i915_address_space *
+i915_gem_address_space_lookup(struct drm_i915_file_private *file_priv,
+			      u32 id)
+{
+	struct i915_address_space *vm;
+
+	rcu_read_lock();
+	vm = __i915_gem_address_space_lookup_rcu(file_priv, id);
+	if (vm)
+		vm = i915_vm_get(vm);
+	rcu_read_unlock();
+
+	return vm;
+}
+
 /* i915_gem_evict.c */
 int __must_check i915_gem_evict_something(struct i915_address_space *vm,
 					  u64 min_size, u64 alignment,
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 6239a9adbf14..44ff4074db12 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -574,6 +574,7 @@ static void i915_address_space_init(struct i915_address_space *vm, int subclass)
 	stash_init(&vm->free_pages);
 
 	INIT_LIST_HEAD(&vm->bound_list);
+	INIT_LIST_HEAD(&vm->svm_list);
 }
 
 static int __setup_page_dma(struct i915_address_space *vm,
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index 402283ce2864..d618a5787c61 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -285,6 +285,13 @@ struct pagestash {
 	struct pagevec pvec;
 };
 
+struct i915_svm_obj {
+	/** This obj's place in the SVM object list */
+	struct list_head link;
+	u32 handle;
+	u64 offset;
+};
+
 struct i915_address_space {
 	struct kref ref;
 	struct rcu_work rcu;
@@ -329,6 +336,12 @@ struct i915_address_space {
 	 */
 	struct list_head bound_list;
 
+	/**
+	 * List of SVM bind objects.
+	 */
+	struct list_head svm_list;
+	unsigned int svm_count;
+
 	struct pagestash free_pages;
 
 	/* Global GTT */
Shared Virtual Memory (SVM) runtime allocator support allows
binding a shared virtual address to a buffer object (BO) in the
device page table through an ioctl call.

Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Jon Bloomfield <jon.bloomfield@intel.com>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Sudeep Dutt <sudeep.dutt@intel.com>
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
---
 drivers/gpu/drm/i915/Kconfig                  | 11 ++++
 drivers/gpu/drm/i915/Makefile                 |  3 +
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 58 +++++++++++++++----
 drivers/gpu/drm/i915/gem/i915_gem_svm.c       | 51 ++++++++++++++++
 drivers/gpu/drm/i915/gem/i915_gem_svm.h       | 22 +++++++
 drivers/gpu/drm/i915/i915_drv.c               | 22 +++++++
 drivers/gpu/drm/i915/i915_drv.h               | 22 +++++++
 drivers/gpu/drm/i915/i915_gem_gtt.c           |  1 +
 drivers/gpu/drm/i915/i915_gem_gtt.h           | 13 +++++
 9 files changed, 192 insertions(+), 11 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/gem/i915_gem_svm.c
 create mode 100644 drivers/gpu/drm/i915/gem/i915_gem_svm.h
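For completeness, a userspace sketch of exercising the I915_BIND ioctl this
patch adds. The uapi definition of struct drm_i915_bind is not in the hunks
shown here, so the layout below is inferred from the fields the kernel side
dereferences (vm_id, handle, start, flags, type); the field order and the
constant values are assumptions for illustration only.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>

/* Placeholder values -- the real definitions live in the series'
 * i915_drm.h changes, which are not part of these hunks. */
#define I915_BIND_UNBIND	(1u << 0)
#define I915_BIND_SVM_GEM_OBJ	1

/* Inferred layout; see the caveat above. */
struct drm_i915_bind {
	uint32_t vm_id;		/* VM as created by I915_GEM_VM_CREATE */
	uint32_t handle;	/* GEM handle of the BO */
	uint64_t start;		/* virtual address to bind the BO at */
	uint64_t flags;		/* 0 to bind, I915_BIND_UNBIND to unbind */
	uint32_t type;		/* I915_BIND_SVM_GEM_OBJ for this path */
	uint32_t pad;
};

static int svm_bind_bo(int drm_fd, unsigned long request,
		       uint32_t vm_id, uint32_t handle, uint64_t va)
{
	struct drm_i915_bind bind;

	memset(&bind, 0, sizeof(bind));
	bind.vm_id = vm_id;
	bind.handle = handle;
	bind.start = va;
	bind.type = I915_BIND_SVM_GEM_OBJ;

	/* 'request' would be DRM_IOCTL_I915_BIND from the series' uapi
	 * header; it is passed in here because its number is not known
	 * from these hunks. */
	return ioctl(drm_fd, request, &bind);
}

Once bound, the object is injected into every execbuf's object list by the
i915_gem_execbuffer2_ioctl() changes above, so the client no longer has to
list it per submission.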