
[2/3] drm/i915: Introduce a new create ioctl for user specified placement

Message ID 1402932546-16653-3-git-send-email-sourab.gupta@intel.com (mailing list archive)
State New, archived

Commit Message

sourab.gupta@intel.com June 16, 2014, 3:29 p.m. UTC
From: Chris Wilson <chris@chris-wilson.co.uk>

Despite being a unified memory architecture (UMA), some bits of memory
are more equal than others. In particular, we have the thorny issue of
stolen memory: memory stolen from the system by the BIOS and reserved
for igfx use. Stolen memory is required for some functions of the GPU
and display engine, but in general it otherwise goes to waste. Whilst
we cannot return it to the system, we need to find some other method of
utilising it. As we do not support direct CPU access to the physical
addresses in the stolen region, it behaves like a different class of
memory, closer in kind to local GPU memory. This strongly suggests that
we need a placement model like TTM if we are to fully utilise these
discrete chunks of differing memory.

This new create ioctl therefore exists to allow the user to create
these second-class buffer objects from stolen memory. At the moment,
direct CPU access through mmaps and pread/pwrite is verboten on such
objects, and so the user must be aware of the limitations of the
objects created. Yet those limitations rarely reduce the desired
functionality in many use cases, and so the user can readily fill
stolen memory and thereby help to reduce overall memory pressure.

The most obvious use case for stolen memory is the creation of objects
for the display engine, which already have very similar restrictions on
access. However, we want a reasonably general ioctl in order to cater
for diverse scenarios beyond the author's imagination.
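
For illustration, userspace could drive the new ioctl roughly as
follows. This is a minimal sketch rather than part of the patch: it
assumes libdrm's drmIoctl() wrapper, an already-open device fd and the
uapi structure added below, and the helper name is made up.

#include <stdint.h>
#include <string.h>
#include <errno.h>
#include <xf86drm.h>
#include <i915_drm.h>

static int create_stolen_bo(int fd, uint64_t size, uint32_t *handle)
{
	struct drm_i915_gem_create2 create;

	memset(&create, 0, sizeof(create));
	create.size = size;	/* must be non-zero and page-aligned */
	create.placement = I915_CREATE_PLACEMENT_STOLEN;
	create.caching = I915_CACHING_DISPLAY;
	create.madvise = I915_MADV_WILLNEED;

	if (drmIoctl(fd, DRM_IOCTL_I915_GEM_CREATE2, &create))
		return -errno;	/* e.g. ENOMEM once stolen is exhausted */

	*handle = create.handle;
	return 0;
}

Note that the returned handle rejects CPU mmaps and pread/pwrite, so
the object must be filled via the GPU (the kernel clears stolen objects
at creation) before it is scanned out.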

v2: Expand the struct slightly to include cache domains, and ensure that
all memory allocated by the kernel for userspace is zeroed.

v3: Ben suggested the idea of binding the object at a known offset into
the target context upon creation.
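
A sketch of that v3 extension, continuing the example above (again
illustrative only; ctx_id is assumed to come from a prior
DRM_IOCTL_I915_GEM_CONTEXT_CREATE, and the address is arbitrary):

	create.context = ctx_id;
	create.offset = (8ULL << 20) | I915_CREATE_OFFSET_VALID;

The offset must be page-aligned and carry the valid bit; the object is
then bound at that address in the context's address space, with
creation failing if the range cannot be reserved.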

Testcase: igt/gem_create2

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Sourab Gupta <sourab.gupta@intel.com>
---
 drivers/gpu/drm/i915/i915_dma.c        |  11 +-
 drivers/gpu/drm/i915/i915_drv.h        |  15 ++-
 drivers/gpu/drm/i915/i915_gem.c        | 208 ++++++++++++++++++++++++++++++---
 drivers/gpu/drm/i915/i915_gem_tiling.c | 106 +++++++++--------
 include/uapi/drm/i915_drm.h            | 107 +++++++++++++++++
 5 files changed, 373 insertions(+), 74 deletions(-)

Comments

Lespiau, Damien June 16, 2014, 3:39 p.m. UTC | #1
On Mon, Jun 16, 2014 at 08:59:05PM +0530, sourab.gupta@intel.com wrote:
> From: Chris Wilson <chris@chris-wilson.co.uk>
> 
> Despite being a unified memory architecture (UMA), some bits of memory
> are more equal than others. In particular, we have the thorny issue of
> stolen memory: memory stolen from the system by the BIOS and reserved
> for igfx use. Stolen memory is required for some functions of the GPU
> and display engine, but in general it otherwise goes to waste. Whilst
> we cannot return it to the system, we need to find some other method of
> utilising it. As we do not support direct CPU access to the physical
> addresses in the stolen region, it behaves like a different class of
> memory, closer in kind to local GPU memory. This strongly suggests that
> we need a placement model like TTM if we are to fully utilise these
> discrete chunks of differing memory.
> 
> This new create ioctl therefore exists to allow the user to create
> these second-class buffer objects from stolen memory. At the moment,
> direct CPU access through mmaps and pread/pwrite is verboten on such
> objects, and so the user must be aware of the limitations of the
> objects created. Yet those limitations rarely reduce the desired
> functionality in many use cases, and so the user can readily fill
> stolen memory and thereby help to reduce overall memory pressure.
> 
> The most obvious use case for stolen memory is the creation of objects
> for the display engine, which already have very similar restrictions on
> access. However, we want a reasonably general ioctl in order to cater
> for diverse scenarios beyond the author's imagination.
> 
> v2: Expand the struct slightly to include cache domains, and ensure that
> all memory allocated by the kernel for userspace is zeroed.
> 
> v3: Ben suggested the idea of binding the object at a known offset into
> the target context upon creation.
> 
> Testcase: igt/gem_create2
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Signed-off-by: Sourab Gupta <sourab.gupta@intel.com>

What about those ioctls() losing DRM_AUTH?

Patch

diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
index aadb0c9..f101d49 100644
--- a/drivers/gpu/drm/i915/i915_dma.c
+++ b/drivers/gpu/drm/i915/i915_dma.c
@@ -1996,10 +1996,10 @@  const struct drm_ioctl_desc i915_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(I915_GEM_EXECBUFFER2, i915_gem_execbuffer2, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_GEM_PIN, i915_gem_pin_ioctl, DRM_AUTH|DRM_ROOT_ONLY|DRM_UNLOCKED),
 	DRM_IOCTL_DEF_DRV(I915_GEM_UNPIN, i915_gem_unpin_ioctl, DRM_AUTH|DRM_ROOT_ONLY|DRM_UNLOCKED),
-	DRM_IOCTL_DEF_DRV(I915_GEM_BUSY, i915_gem_busy_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(I915_GEM_BUSY, i915_gem_busy_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_GEM_SET_CACHING, i915_gem_set_caching_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_GEM_GET_CACHING, i915_gem_get_caching_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
-	DRM_IOCTL_DEF_DRV(I915_GEM_THROTTLE, i915_gem_throttle_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(I915_GEM_THROTTLE, i915_gem_throttle_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_GEM_ENTERVT, i915_gem_entervt_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY|DRM_UNLOCKED),
 	DRM_IOCTL_DEF_DRV(I915_GEM_LEAVEVT, i915_gem_leavevt_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY|DRM_UNLOCKED),
 	DRM_IOCTL_DEF_DRV(I915_GEM_CREATE, i915_gem_create_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
@@ -2009,8 +2009,8 @@  const struct drm_ioctl_desc i915_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(I915_GEM_MMAP_GTT, i915_gem_mmap_gtt_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_GEM_SET_DOMAIN, i915_gem_set_domain_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_GEM_SW_FINISH, i915_gem_sw_finish_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
-	DRM_IOCTL_DEF_DRV(I915_GEM_SET_TILING, i915_gem_set_tiling, DRM_UNLOCKED|DRM_RENDER_ALLOW),
-	DRM_IOCTL_DEF_DRV(I915_GEM_GET_TILING, i915_gem_get_tiling, DRM_UNLOCKED|DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(I915_GEM_SET_TILING, i915_gem_set_tiling_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(I915_GEM_GET_TILING, i915_gem_get_tiling_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_GEM_GET_APERTURE, i915_gem_get_aperture_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_GET_PIPE_FROM_CRTC_ID, intel_get_pipe_from_crtc_id, DRM_UNLOCKED),
 	DRM_IOCTL_DEF_DRV(I915_GEM_MADVISE, i915_gem_madvise_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
@@ -2018,12 +2018,13 @@  const struct drm_ioctl_desc i915_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(I915_OVERLAY_ATTRS, intel_overlay_attrs, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
 	DRM_IOCTL_DEF_DRV(I915_SET_SPRITE_COLORKEY, intel_sprite_set_colorkey, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
 	DRM_IOCTL_DEF_DRV(I915_GET_SPRITE_COLORKEY, intel_sprite_get_colorkey, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
-	DRM_IOCTL_DEF_DRV(I915_GEM_WAIT, i915_gem_wait_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(I915_GEM_WAIT, i915_gem_wait_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_CREATE, i915_gem_context_create_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_DESTROY, i915_gem_context_destroy_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_REG_READ, i915_reg_read_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_GET_RESET_STATS, i915_get_reset_stats_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_GEM_USERPTR, i915_gem_userptr_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(I915_GEM_CREATE2, i915_gem_create2_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
 };
 
 int i915_max_ioctl = ARRAY_SIZE(i915_ioctls);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 0cdd4d7..b097d6e 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2104,6 +2104,8 @@  int i915_gem_init_ioctl(struct drm_device *dev, void *data,
 			struct drm_file *file_priv);
 int i915_gem_create_ioctl(struct drm_device *dev, void *data,
 			  struct drm_file *file_priv);
+int i915_gem_create2_ioctl(struct drm_device *dev, void *data,
+			   struct drm_file *file_priv);
 int i915_gem_pread_ioctl(struct drm_device *dev, void *data,
 			 struct drm_file *file_priv);
 int i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
@@ -2138,10 +2140,10 @@  int i915_gem_entervt_ioctl(struct drm_device *dev, void *data,
 			   struct drm_file *file_priv);
 int i915_gem_leavevt_ioctl(struct drm_device *dev, void *data,
 			   struct drm_file *file_priv);
-int i915_gem_set_tiling(struct drm_device *dev, void *data,
-			struct drm_file *file_priv);
-int i915_gem_get_tiling(struct drm_device *dev, void *data,
-			struct drm_file *file_priv);
+int i915_gem_set_tiling_ioctl(struct drm_device *dev, void *data,
+			      struct drm_file *file_priv);
+int i915_gem_get_tiling_ioctl(struct drm_device *dev, void *data,
+			      struct drm_file *file_priv);
 int i915_gem_init_userptr(struct drm_device *dev);
 int i915_gem_userptr_ioctl(struct drm_device *dev, void *data,
 			   struct drm_file *file);
@@ -2264,6 +2266,8 @@  static inline bool i915_stop_ring_allow_warn(struct drm_i915_private *dev_priv)
 
 void i915_gem_reset(struct drm_device *dev);
 bool i915_gem_clflush_object(struct drm_i915_gem_object *obj, bool force);
+int __must_check i915_gem_object_set_tiling(struct drm_i915_gem_object *obj,
+					    int tiling_mode, int pitch);
 int __must_check i915_gem_object_finish_gpu(struct drm_i915_gem_object *obj);
 int __must_check i915_gem_init(struct drm_device *dev);
 int __must_check i915_gem_init_hw(struct drm_device *dev);
@@ -2296,6 +2300,9 @@  int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj,
 int i915_gem_open(struct drm_device *dev, struct drm_file *file);
 void i915_gem_release(struct drm_device *dev, struct drm_file *file);
 
+bool
+i915_tiling_ok(struct drm_device *dev, int stride, int size, int tiling_mode);
+
 uint32_t
 i915_gem_get_gtt_size(struct drm_device *dev, uint32_t size, int tiling_mode);
 uint32_t
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 1794a04..16543c2 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -53,6 +53,14 @@  static void i915_gem_object_update_fence(struct drm_i915_gem_object *obj,
 					 struct drm_i915_fence_reg *fence,
 					 bool enable);
 
+#define PIN_OFFSET_VALID 0x1
+static struct i915_vma *
+i915_gem_object_bind_to_vm(struct drm_i915_gem_object *obj,
+			   struct i915_address_space *vm,
+			   uint64_t offset,
+			   unsigned alignment,
+			   uint64_t flags);
+
 static unsigned long i915_gem_shrinker_count(struct shrinker *shrinker,
 					     struct shrink_control *sc);
 static unsigned long i915_gem_shrinker_scan(struct shrinker *shrinker,
@@ -380,8 +388,7 @@  i915_gem_dumb_create(struct drm_file *file,
 	/* have to work out size/pitch and return them */
 	args->pitch = ALIGN(args->width * DIV_ROUND_UP(args->bpp, 8), 64);
 	args->size = args->pitch * args->height;
-	return i915_gem_create(file, dev,
-			       args->size, &args->handle);
+	return i915_gem_create(file, dev, args->size, &args->handle);
 }
 
 /**
@@ -392,9 +399,155 @@  i915_gem_create_ioctl(struct drm_device *dev, void *data,
 		      struct drm_file *file)
 {
 	struct drm_i915_gem_create *args = data;
+	return i915_gem_create(file, dev, args->size, &args->handle);
+}
+
+int
+i915_gem_create2_ioctl(struct drm_device *dev, void *data,
+		       struct drm_file *file)
+{
+	struct drm_i915_gem_create2 *args = data;
+	struct drm_i915_gem_object *obj;
+	unsigned cache_level;
+	enum {
+		ASYNC_CLEAR = 0x1,
+	} flags = 0;
+	int ret;
+
+	if (args->pad)
+		return -EINVAL;
+
+	if (args->flags & ~(0))
+		return -EINVAL;
+
+	if (!i915_tiling_ok(dev, args->stride, args->size, args->tiling_mode))
+		return -EINVAL;
+
+	switch (args->domain) {
+	case 0:
+	case I915_GEM_DOMAIN_CPU:
+	case I915_GEM_DOMAIN_GTT:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (args->caching) {
+	case I915_CACHING_NONE:
+		cache_level = I915_CACHE_NONE;
+		break;
+	case I915_CACHING_CACHED:
+		cache_level = I915_CACHE_LLC;
+		break;
+	case I915_CACHING_DISPLAY:
+		cache_level = HAS_WT(dev) ? I915_CACHE_WT : I915_CACHE_NONE;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	switch (args->madvise) {
+	case I915_MADV_DONTNEED:
+	case I915_MADV_WILLNEED:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	if (args->size == 0 || args->size & 4095)
+		return -EINVAL;
+
+	ret = i915_mutex_lock_interruptible(dev);
+	if (ret)
+		return ret;
+
+	obj = NULL;
+	switch (args->placement) {
+	case I915_CREATE_PLACEMENT_SYSTEM:
+		obj = i915_gem_alloc_object(dev, args->size);
+		break;
+	case I915_CREATE_PLACEMENT_STOLEN:
+		obj = i915_gem_object_create_stolen(dev, args->size);
+		flags |= ASYNC_CLEAR;
+		break;
+	default:
+		ret = -EINVAL;
+		goto unlock;
+	}
+	if (obj == NULL) {
+		ret = -ENOMEM;
+		goto unlock;
+	}
+
+	ret = i915_gem_object_set_cache_level(obj, cache_level);
+	if (ret)
+		goto err;
+
+	ret = i915_gem_object_set_tiling(obj, args->tiling_mode, args->stride);
+	if (ret)
+		goto err;
 
-	return i915_gem_create(file, dev,
-			       args->size, &args->handle);
+	if (args->offset & I915_CREATE_OFFSET_VALID) {
+		struct intel_context *ctx;
+		struct i915_vma *vma;
+
+		ctx = i915_gem_context_get(file->driver_priv, args->context);
+		if (IS_ERR(ctx)) {
+			ret = PTR_ERR(ctx);
+			goto err;
+		}
+
+		vma = i915_gem_obj_to_vma(obj, ctx->vm);
+		if (vma && drm_mm_node_allocated(&vma->node)) {
+			if (vma->node.start != (args->offset &
+						~I915_CREATE_OFFSET_VALID)) {
+				ret = i915_vma_unbind(vma);
+				if (ret)
+					goto err;
+
+				vma = NULL;
+			}
+		}
+
+		if (vma == NULL || !drm_mm_node_allocated(&vma->node)) {
+			vma = i915_gem_object_bind_to_vm(obj, ctx->vm,
+					args->offset, 0, flags);
+			if (IS_ERR(vma)) {
+				ret = PTR_ERR(vma);
+				goto err;
+			}
+		}
+	}
+
+	if (flags & ASYNC_CLEAR) {
+		ret = i915_gem_exec_clear_object(obj);
+		if (ret)
+			goto err;
+	}
+
+	if (args->domain) {
+		if (args->domain == I915_GEM_DOMAIN_GTT) {
+			ret = i915_gem_object_set_to_gtt_domain(obj, true);
+			if (ret == -EINVAL) /* unbound */
+				ret = 0;
+		} else {
+			ret = i915_gem_object_set_to_cpu_domain(obj, true);
+		}
+		if (ret)
+			goto err;
+	}
+
+	ret = drm_gem_handle_create(file, &obj->base, &args->handle);
+	if (ret)
+		goto err;
+
+	obj->madv = args->madvise;
+	trace_i915_gem_object_create(obj);
+err:
+	drm_gem_object_unreference(&obj->base);
+unlock:
+	mutex_unlock(&dev->struct_mutex);
+	return ret;
 }
 
 static inline int
@@ -3383,6 +3536,7 @@  static void i915_gem_verify_gtt(struct drm_device *dev)
 static struct i915_vma *
 i915_gem_object_bind_to_vm(struct drm_i915_gem_object *obj,
 			   struct i915_address_space *vm,
+			   uint64_t offset,
 			   unsigned alignment,
 			   uint64_t flags)
 {
@@ -3438,22 +3592,38 @@  i915_gem_object_bind_to_vm(struct drm_i915_gem_object *obj,
 	if (IS_ERR(vma))
 		goto err_unpin;
 
+	if (offset & PIN_OFFSET_VALID) {
+		offset &= ~PIN_OFFSET_VALID;
+		if (alignment && offset & (alignment - 1)) {
+			vma = ERR_PTR(-EINVAL);
+			goto err_free_vma;
+		}
+
+		vma->node.start = offset;
+		vma->node.size = size;
+		vma->node.color = obj->cache_level;
+		ret = drm_mm_reserve_node(&vm->mm, &vma->node);
+		if (ret) {
+			vma = ERR_PTR(ret);
+			goto err_free_vma;
+		}
+	} else {
 search_free:
-	ret = drm_mm_insert_node_in_range_generic(&vm->mm, &vma->node,
-						  size, alignment,
-						  obj->cache_level,
-						  start, end,
-						  DRM_MM_SEARCH_DEFAULT,
-						  DRM_MM_CREATE_DEFAULT);
-	if (ret) {
-		ret = i915_gem_evict_something(dev, vm, size, alignment,
-					       obj->cache_level,
-					       start, end,
-					       flags);
-		if (ret == 0)
-			goto search_free;
+		ret = drm_mm_insert_node_in_range_generic(&vm->mm, &vma->node,
+				size, alignment,
+				obj->cache_level,
+				start, end,
+				DRM_MM_SEARCH_DEFAULT,
+				DRM_MM_CREATE_DEFAULT);
+		if (ret) {
+			ret = i915_gem_evict_something(dev, vm, size, alignment,
+						obj->cache_level,
+						start, end, flags);
+			if (ret == 0)
+				goto search_free;
 
-		goto err_free_vma;
+			goto err_free_vma;
+		}
 	}
 	if (WARN_ON(!i915_gem_valid_gtt_space(dev, &vma->node,
 					      obj->cache_level))) {
@@ -4081,7 +4251,7 @@  i915_gem_object_pin(struct drm_i915_gem_object *obj,
 	}
 
 	if (vma == NULL || !drm_mm_node_allocated(&vma->node)) {
-		vma = i915_gem_object_bind_to_vm(obj, vm, alignment, flags);
+		vma = i915_gem_object_bind_to_vm(obj, vm, 0, alignment, flags);
 		if (IS_ERR(vma))
 			return PTR_ERR(vma);
 	}
diff --git a/drivers/gpu/drm/i915/i915_gem_tiling.c b/drivers/gpu/drm/i915/i915_gem_tiling.c
index cb150e8..683e0853 100644
--- a/drivers/gpu/drm/i915/i915_gem_tiling.c
+++ b/drivers/gpu/drm/i915/i915_gem_tiling.c
@@ -201,7 +201,7 @@  i915_gem_detect_bit_6_swizzle(struct drm_device *dev)
 }
 
 /* Check pitch constriants for all chips & tiling formats */
-static bool
+bool
 i915_tiling_ok(struct drm_device *dev, int stride, int size, int tiling_mode)
 {
 	int tile_width;
@@ -285,12 +285,68 @@  i915_gem_object_fence_ok(struct drm_i915_gem_object *obj, int tiling_mode)
 	return true;
 }
 
+int
+i915_gem_object_set_tiling(struct drm_i915_gem_object *obj,
+			   int tiling_mode, int stride)
+{
+	struct drm_i915_private *dev_priv = obj->base.dev->dev_private;
+	int ret;
+
+	if (tiling_mode == obj->tiling_mode && stride == obj->stride)
+		return 0;
+
+	/* We need to rebind the object if its current allocation
+	 * no longer meets the alignment restrictions for its new
+	 * tiling mode. Otherwise we can just leave it alone, but
+	 * need to ensure that any fence register is updated before
+	 * the next fenced (either through the GTT or by the BLT unit
+	 * on older GPUs) access.
+	 *
+	 * After updating the tiling parameters, we then flag whether
+	 * we need to update an associated fence register. Note this
+	 * has to also include the unfenced register the GPU uses
+	 * whilst executing a fenced command for an untiled object.
+	 */
+
+	obj->map_and_fenceable =
+		!i915_gem_obj_ggtt_bound(obj) ||
+		((i915_gem_obj_ggtt_offset(obj) + obj->base.size)
+		 <= dev_priv->gtt.mappable_end &&
+		 i915_gem_object_fence_ok(obj, tiling_mode));
+
+	/* Rebind if we need a change of alignment */
+	ret = 0;
+	if (!obj->map_and_fenceable) {
+		u32 unfenced_alignment =
+			i915_gem_get_gtt_alignment(dev_priv->dev,
+						   obj->base.size, tiling_mode,
+						   false);
+		if (i915_gem_obj_ggtt_offset(obj) & (unfenced_alignment - 1))
+			ret = i915_gem_object_ggtt_unbind(obj);
+	}
+
+	if (ret == 0) {
+		obj->fence_dirty =
+			obj->fenced_gpu_access ||
+			obj->fence_reg != I915_FENCE_REG_NONE;
+
+		obj->tiling_mode = tiling_mode;
+		obj->stride = stride;
+
+		/* Force the fence to be reacquired for GTT access */
+		i915_gem_release_mmap(obj);
+	}
+
+	return ret;
+}
+
+
 /**
  * Sets the tiling mode of an object, returning the required swizzling of
  * bit 6 of addresses in the object.
  */
 int
-i915_gem_set_tiling(struct drm_device *dev, void *data,
+i915_gem_set_tiling_ioctl(struct drm_device *dev, void *data,
 		   struct drm_file *file)
 {
 	struct drm_i915_gem_set_tiling *args = data;
@@ -343,49 +399,7 @@  i915_gem_set_tiling(struct drm_device *dev, void *data,
 	}
 
 	mutex_lock(&dev->struct_mutex);
-	if (args->tiling_mode != obj->tiling_mode ||
-	    args->stride != obj->stride) {
-		/* We need to rebind the object if its current allocation
-		 * no longer meets the alignment restrictions for its new
-		 * tiling mode. Otherwise we can just leave it alone, but
-		 * need to ensure that any fence register is updated before
-		 * the next fenced (either through the GTT or by the BLT unit
-		 * on older GPUs) access.
-		 *
-		 * After updating the tiling parameters, we then flag whether
-		 * we need to update an associated fence register. Note this
-		 * has to also include the unfenced register the GPU uses
-		 * whilst executing a fenced command for an untiled object.
-		 */
-
-		obj->map_and_fenceable =
-			!i915_gem_obj_ggtt_bound(obj) ||
-			(i915_gem_obj_ggtt_offset(obj) +
-			 obj->base.size <= dev_priv->gtt.mappable_end &&
-			 i915_gem_object_fence_ok(obj, args->tiling_mode));
-
-		/* Rebind if we need a change of alignment */
-		if (!obj->map_and_fenceable) {
-			u32 unfenced_align =
-				i915_gem_get_gtt_alignment(dev, obj->base.size,
-							    args->tiling_mode,
-							    false);
-			if (i915_gem_obj_ggtt_offset(obj) & (unfenced_align - 1))
-				ret = i915_gem_object_ggtt_unbind(obj);
-		}
-
-		if (ret == 0) {
-			obj->fence_dirty =
-				obj->fenced_gpu_access ||
-				obj->fence_reg != I915_FENCE_REG_NONE;
-
-			obj->tiling_mode = args->tiling_mode;
-			obj->stride = args->stride;
-
-			/* Force the fence to be reacquired for GTT access */
-			i915_gem_release_mmap(obj);
-		}
-	}
+	ret = i915_gem_object_set_tiling(obj, args->tiling_mode, args->stride);
 	/* we have to maintain this existing ABI... */
 	args->stride = obj->stride;
 	args->tiling_mode = obj->tiling_mode;
@@ -411,7 +425,7 @@  i915_gem_set_tiling(struct drm_device *dev, void *data,
  * Returns the current tiling mode and required bit 6 swizzling for the object.
  */
 int
-i915_gem_get_tiling(struct drm_device *dev, void *data,
+i915_gem_get_tiling_ioctl(struct drm_device *dev, void *data,
 		   struct drm_file *file)
 {
 	struct drm_i915_gem_get_tiling *args = data;
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index ff57f07..7cf2382 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -224,6 +224,7 @@  typedef struct _drm_i915_sarea {
 #define DRM_I915_REG_READ		0x31
 #define DRM_I915_GET_RESET_STATS	0x32
 #define DRM_I915_GEM_USERPTR		0x33
+#define DRM_I915_GEM_CREATE2		0x34
 
 #define DRM_IOCTL_I915_INIT		DRM_IOW( DRM_COMMAND_BASE + DRM_I915_INIT, drm_i915_init_t)
 #define DRM_IOCTL_I915_FLUSH		DRM_IO ( DRM_COMMAND_BASE + DRM_I915_FLUSH)
@@ -254,6 +255,7 @@  typedef struct _drm_i915_sarea {
 #define DRM_IOCTL_I915_GEM_ENTERVT	DRM_IO(DRM_COMMAND_BASE + DRM_I915_GEM_ENTERVT)
 #define DRM_IOCTL_I915_GEM_LEAVEVT	DRM_IO(DRM_COMMAND_BASE + DRM_I915_GEM_LEAVEVT)
 #define DRM_IOCTL_I915_GEM_CREATE	DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_CREATE, struct drm_i915_gem_create)
+#define DRM_IOCTL_I915_GEM_CREATE2	DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_CREATE2, struct drm_i915_gem_create2)
 #define DRM_IOCTL_I915_GEM_PREAD	DRM_IOW (DRM_COMMAND_BASE + DRM_I915_GEM_PREAD, struct drm_i915_gem_pread)
 #define DRM_IOCTL_I915_GEM_PWRITE	DRM_IOW (DRM_COMMAND_BASE + DRM_I915_GEM_PWRITE, struct drm_i915_gem_pwrite)
 #define DRM_IOCTL_I915_GEM_MMAP		DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_MMAP, struct drm_i915_gem_mmap)
@@ -437,6 +439,111 @@  struct drm_i915_gem_create {
 	__u32 pad;
 };
 
+struct drm_i915_gem_create2 {
+	/**
+	 * Requested size for the object.
+	 *
+	 * The (page-aligned) allocated size for the object will be returned.
+	 */
+	__u64 size;
+
+	/**
+	 * Requested offset for the object.
+	 *
+	 * Can be used for "soft-pinning" the object into the per-process
+	 * GTT of the target context upon creation. Only possible if using
+	 * contexts and per-process GTTs.
+	 *
+	 * The address must be page-aligned, and have the valid bit set.
+	 */
+	__u64 offset;
+#define I915_CREATE_OFFSET_VALID (1<<0)
+
+	/**
+	 * Target context of the object.
+	 *
+	 * The context of the object can be used for setting the initial offset
+	 * of the object in the per-process GTT.
+	 */
+	__u32 context;
+
+	/**
+	 * Requested placement (which memory domain)
+	 *
+	 * You can request that the object be created from special memory
+	 * rather than regular system pages. Such irregular objects may
+	 * have certain restrictions (such as CPU access to a stolen
+	 * object is verboten).
+	 */
+	__u32 placement;
+#define I915_CREATE_PLACEMENT_SYSTEM 0
+#define I915_CREATE_PLACEMENT_STOLEN 1 /* Cannot use CPU mmaps or pread/pwrite */
+	/**
+	 * Requested domain (which cache domain)
+	 *
+	 * You can request that the object be created from memory in a
+	 * certain cache domain (such as RENDER, CPU or GTT). In some cases,
+	 * this then may allocate from a pool of such pages to avoid any
+	 * migration overhead, but it is always equivalent to performing
+	 * an explicit set-domain(read=DOMAIN, write=DOMAIN) on the
+	 * constructed object.
+	 *
+	 * Set to 0, to leave the initial domain unspecified and defaulting
+	 * to the domain set by the constructor.
+	 *
+	 * See DRM_IOCTL_I915_GEM_SET_DOMAIN
+	 */
+	__u32 domain;
+
+	/**
+	 * Requested cache level.
+	 *
+	 * See DRM_IOCTL_I915_GEM_SET_CACHING
+	 */
+	__u32 caching;
+
+	/**
+	 * Requested tiling mode.
+	 *
+	 * See DRM_IOCTL_I915_GEM_SET_TILING
+	 */
+	__u32 tiling_mode;
+	/**
+	 * Requested stride for tiling.
+	 *
+	 * See DRM_IOCTL_I915_GEM_SET_TILING
+	 */
+	__u32 stride;
+
+	/**
+	 * Requested madvise priority.
+	 *
+	 * See DRM_IOCTL_I915_GEM_MADVISE
+	 */
+	__u32 madvise;
+
+	/**
+	 * Additional miscellaneous flags
+	 *
+	 * Reserved for future use, must be zero.
+	 */
+	__u32 flags;
+
+	/**
+	 * Padding for 64-bit struct alignment.
+	 *
+	 * Reserved for future use, must be zero.
+	 */
+	__u32 pad;
+
+	/**
+	 * Returned handle for the object.
+	 *
+	 * Object handles are nonzero.
+	 */
+	__u32 handle;
+};
+
 struct drm_i915_gem_pread {
 	/** Handle for the object being read. */
 	__u32 handle;