
[v4,1/3] drm/i915: simplify allocation of driver-internal requests

Message ID 1453230175-19330-2-git-send-email-david.s.gordon@intel.com (mailing list archive)
State New, archived

Commit Message

Dave Gordon Jan. 19, 2016, 7:02 p.m. UTC
There are a number of places where the driver needs a request, but isn't
working on behalf of any specific user or in a specific context. At
present, we associate them with the per-engine default context. A future
patch will abolish those per-engine context pointers; but we can already
eliminate a lot of the references to them, just by making the allocator
allow NULL as a shorthand for "an appropriate context for this ring",
which will mean that the callers don't need to know anything about how
the "appropriate context" is found (e.g. per-ring vs per-device, etc).

So this patch renames the existing i915_gem_request_alloc(), and makes
it local (static inline), and replaces it with a wrapper that provides
a default if the context is NULL, and also has a nicer calling
convention (doesn't require a pointer to an output parameter). Then we
change all callers to use the new convention:
OLD:
	err = i915_gem_request_alloc(ring, user_ctx, &req);
	if (err) ...
NEW:
	req = i915_gem_request_alloc(ring, user_ctx);
	if (IS_ERR(req)) ...
OLD:
	err = i915_gem_request_alloc(ring, ring->default_context, &req);
	if (err) ...
NEW:
	req = i915_gem_request_alloc(ring, NULL);
	if (IS_ERR(req)) ...
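
For completeness, a full caller under the new convention reads as
follows (this exact pattern appears verbatim in the i915_gpu_idle()
hunk below):

	struct drm_i915_gem_request *req;

	req = i915_gem_request_alloc(ring, NULL);	/* NULL => default context */
	if (IS_ERR(req))
		return PTR_ERR(req);			/* decode errno from the ERR_PTR */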

v4:	Rebased

Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
Reviewed-by: Nick Hoath <nicholas.hoath@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h            |  6 ++--
 drivers/gpu/drm/i915/i915_gem.c            | 55 +++++++++++++++++++++++-------
 drivers/gpu/drm/i915/i915_gem_execbuffer.c | 14 +++++---
 drivers/gpu/drm/i915/intel_display.c       |  6 ++--
 drivers/gpu/drm/i915/intel_lrc.c           |  9 +++--
 drivers/gpu/drm/i915/intel_overlay.c       | 24 ++++++-------
 6 files changed, 74 insertions(+), 40 deletions(-)

Comments

Tvrtko Ursulin Jan. 20, 2016, 9:56 a.m. UTC | #1
Hi,

On 19/01/16 19:02, Dave Gordon wrote:
> There are a number of places where the driver needs a request, but isn't
> working on behalf of any specific user or in a specific context. At
> present, we associate them with the per-engine default context. A future
> patch will abolish those per-engine context pointers; but we can already
> eliminate a lot of the references to them, just by making the allocator
> allow NULL as a shorthand for "an appropriate context for this ring",
> which will mean that the callers don't need to know anything about how
> the "appropriate context" is found (e.g. per-ring vs per-device, etc).
>
> So this patch renames the existing i915_gem_request_alloc(), and makes
> it local (static inline), and replaces it with a wrapper that provides
> a default if the context is NULL, and also has a nicer calling
> convention (doesn't require a pointer to an output parameter). Then we
> change all callers to use the new convention:
> OLD:
> 	err = i915_gem_request_alloc(ring, user_ctx, &req);
> 	if (err) ...
> NEW:
> 	req = i915_gem_request_alloc(ring, user_ctx);
> 	if (IS_ERR(req)) ...
> OLD:
> 	err = i915_gem_request_alloc(ring, ring->default_context, &req);
> 	if (err) ...
> NEW:
> 	req = i915_gem_request_alloc(ring, NULL);
> 	if (IS_ERR(req)) ...
>
> v4:	Rebased

I wasn't following the discussion in detail, but there has always been 
strong resistance towards an API that takes NULL to infer some default 
state. At least in some of my past work that was a repeated objection. 
Maybe a way around it would be to have two functions, like:

i915_gem_request_alloc(ring); // default context
i915_gem_request_alloc_ctx(ring, ctx); // user context

?
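
For illustration only, both could be thin wrappers around the same
inner helper (a sketch built on the static __i915_gem_request_alloc()
this patch introduces; the _ctx name is just a suggestion):

	struct drm_i915_gem_request *
	i915_gem_request_alloc_ctx(struct intel_engine_cs *engine,
				   struct intel_context *ctx)
	{
		struct drm_i915_gem_request *req;
		int err;

		/* same inner allocator as in the patch, same ERR_PTR convention */
		err = __i915_gem_request_alloc(engine, ctx, &req);
		return err ? ERR_PTR(err) : req;
	}

	struct drm_i915_gem_request *
	i915_gem_request_alloc(struct intel_engine_cs *engine)
	{
		/* default-context variant takes no ctx argument at all */
		return i915_gem_request_alloc_ctx(engine, engine->default_context);
	}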

Regards,

Tvrtko
Chris Wilson Jan. 20, 2016, 10:20 a.m. UTC | #2
On Wed, Jan 20, 2016 at 09:56:10AM +0000, Tvrtko Ursulin wrote:
> 
> Hi,
> 
> On 19/01/16 19:02, Dave Gordon wrote:
> >There are a number of places where the driver needs a request, but isn't
> >working on behalf of any specific user or in a specific context. At
> >present, we associate them with the per-engine default context. A future
> >patch will abolish those per-engine context pointers; but we can already
> >eliminate a lot of the references to them, just by making the allocator
> >allow NULL as a shorthand for "an appropriate context for this ring",
> >which will mean that the callers don't need to know anything about how
> >the "appropriate context" is found (e.g. per-ring vs per-device, etc).
> >
> >So this patch renames the existing i915_gem_request_alloc(), and makes
> >it local (static inline), and replaces it with a wrapper that provides
> >a default if the context is NULL, and also has a nicer calling
> >convention (doesn't require a pointer to an output parameter). Then we
> >change all callers to use the new convention:
> >OLD:
> >	err = i915_gem_request_alloc(ring, user_ctx, &req);
> >	if (err) ...
> >NEW:
> >	req = i915_gem_request_alloc(ring, user_ctx);
> >	if (IS_ERR(req)) ...
> >OLD:
> >	err = i915_gem_request_alloc(ring, ring->default_context, &req);
> >	if (err) ...
> >NEW:
> >	req = i915_gem_request_alloc(ring, NULL);
> >	if (IS_ERR(req)) ...
> >
> >v4:	Rebased
> 
> I wasn't following the discussion in detail, but there has always
> been strong resistance towards an API that takes NULL to infer some
> default state. At least in some of my past work that was a repeated
> objection. Maybe a way around it would be to have two functions,
> like:
> 
> i915_gem_request_alloc(ring); // default context
> i915_gem_request_alloc_ctx(ring, ctx); // user context

There is only one piece of code where we even want to use a default
kernel context, so it makes no sense from that point of view.
Imho, knowing the context is paramount to achieving context separation
in GEM - which we are far from achieving yet.
-Chris
Tvrtko Ursulin Jan. 22, 2016, 11:12 a.m. UTC | #3
On 19/01/16 19:02, Dave Gordon wrote:
> There are a number of places where the driver needs a request, but isn't
> working on behalf of any specific user or in a specific context. At
> present, we associate them with the per-engine default context. A future
> patch will abolish those per-engine context pointers; but we can already
> eliminate a lot of the references to them, just by making the allocator
> allow NULL as a shorthand for "an appropriate context for this ring",
> which will mean that the callers don't need to know anything about how
> the "appropriate context" is found (e.g. per-ring vs per-device, etc).
> 
> So this patch renames the existing i915_gem_request_alloc(), and makes
> it local (static inline), and replaces it with a wrapper that provides
> a default if the context is NULL, and also has a nicer calling
> convention (doesn't require a pointer to an output parameter). Then we
> change all callers to use the new convention:
> OLD:
> 	err = i915_gem_request_alloc(ring, user_ctx, &req);
> 	if (err) ...
> NEW:
> 	req = i915_gem_request_alloc(ring, user_ctx);
> 	if (IS_ERR(req)) ...
> OLD:
> 	err = i915_gem_request_alloc(ring, ring->default_context, &req);
> 	if (err) ...
> NEW:
> 	req = i915_gem_request_alloc(ring, NULL);
> 	if (IS_ERR(req)) ...
> 
> v4:	Rebased
> 
> Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
> Reviewed-by: Nick Hoath <nicholas.hoath@intel.com>
> ---
>   drivers/gpu/drm/i915/i915_drv.h            |  6 ++--
>   drivers/gpu/drm/i915/i915_gem.c            | 55 +++++++++++++++++++++++-------
>   drivers/gpu/drm/i915/i915_gem_execbuffer.c | 14 +++++---
>   drivers/gpu/drm/i915/intel_display.c       |  6 ++--
>   drivers/gpu/drm/i915/intel_lrc.c           |  9 +++--
>   drivers/gpu/drm/i915/intel_overlay.c       | 24 ++++++-------
>   6 files changed, 74 insertions(+), 40 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index af30148..dfebf9f 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -2279,9 +2279,9 @@ struct drm_i915_gem_request {
>   
>   };
>   
> -int i915_gem_request_alloc(struct intel_engine_cs *ring,
> -			   struct intel_context *ctx,
> -			   struct drm_i915_gem_request **req_out);
> +struct drm_i915_gem_request * __must_check
> +i915_gem_request_alloc(struct intel_engine_cs *engine,
> +		       struct intel_context *ctx);
>   void i915_gem_request_cancel(struct drm_i915_gem_request *req);
>   void i915_gem_request_free(struct kref *req_ref);
>   int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 6b0102d..8e716b6 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -2690,9 +2690,10 @@ void i915_gem_request_free(struct kref *req_ref)
>   	kmem_cache_free(req->i915->requests, req);
>   }
>   
> -int i915_gem_request_alloc(struct intel_engine_cs *ring,
> -			   struct intel_context *ctx,
> -			   struct drm_i915_gem_request **req_out)
> +static inline int
> +__i915_gem_request_alloc(struct intel_engine_cs *ring,
> +			 struct intel_context *ctx,
> +			 struct drm_i915_gem_request **req_out)
>   {
>   	struct drm_i915_private *dev_priv = to_i915(ring->dev);
>   	struct drm_i915_gem_request *req;
> @@ -2755,6 +2756,31 @@ int i915_gem_request_alloc(struct intel_engine_cs *ring,
>   	return ret;
>   }
>   
> +/**
> + * i915_gem_request_alloc - allocate a request structure
> + *
> + * @engine: engine that we wish to issue the request on.
> + * @ctx: context that the request will be associated with.
> + *       This can be NULL if the request is not directly related to
> + *       any specific user context, in which case this function will
> + *       choose an appropriate context to use.
> + *
> + * Returns a pointer to the allocated request if successful,
> + * or an error code if not.
> + */
> +struct drm_i915_gem_request *
> +i915_gem_request_alloc(struct intel_engine_cs *engine,
> +		       struct intel_context *ctx)
> +{
> +	struct drm_i915_gem_request *req;
> +	int err;
> +
> +	if (ctx == NULL)
> +		ctx = engine->default_context;
> +	err = __i915_gem_request_alloc(engine, ctx, &req);
> +	return err ? ERR_PTR(err) : req;
> +}
> +
>   void i915_gem_request_cancel(struct drm_i915_gem_request *req)
>   {
>   	intel_ring_reserved_space_cancel(req->ringbuf);
> @@ -3172,9 +3198,13 @@ void i915_gem_reset(struct drm_device *dev)
>   			return 0;
>   
>   		if (*to_req == NULL) {
> -			ret = i915_gem_request_alloc(to, to->default_context, to_req);
> -			if (ret)
> -				return ret;
> +			struct drm_i915_gem_request *req;
> +
> +			req = i915_gem_request_alloc(to, NULL);
> +			if (IS_ERR(req))
> +				return PTR_ERR(req);
> +
> +			*to_req = req;
>   		}
>   
>   		trace_i915_gem_ring_sync_to(*to_req, from, from_req);
> @@ -3374,9 +3404,9 @@ int i915_gpu_idle(struct drm_device *dev)
>   		if (!i915.enable_execlists) {
>   			struct drm_i915_gem_request *req;
>   
> -			ret = i915_gem_request_alloc(ring, ring->default_context, &req);
> -			if (ret)
> -				return ret;
> +			req = i915_gem_request_alloc(ring, NULL);
> +			if (IS_ERR(req))
> +				return PTR_ERR(req);
>   
>   			ret = i915_switch_context(req);
>   			if (ret) {
> @@ -4871,10 +4901,9 @@ int i915_gem_init_rings(struct drm_device *dev)
>   	for_each_ring(ring, dev_priv, i) {
>   		struct drm_i915_gem_request *req;
>   
> -		WARN_ON(!ring->default_context);
> -
> -		ret = i915_gem_request_alloc(ring, ring->default_context, &req);
> -		if (ret) {
> +		req = i915_gem_request_alloc(ring, NULL);
> +		if (IS_ERR(req)) {
> +			ret = PTR_ERR(req);
>   			i915_gem_cleanup_ringbuffer(dev);
>   			goto out;
>   		}
> diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> index 4edf1c0..dc32018 100644
> --- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> +++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> @@ -1381,6 +1381,7 @@ static int gen8_dispatch_bsd_ring(struct drm_device *dev,
>   		       struct drm_i915_gem_exec_object2 *exec)
>   {
>   	struct drm_i915_private *dev_priv = dev->dev_private;
> +	struct drm_i915_gem_request *req = NULL;
>   	struct eb_vmas *eb;
>   	struct drm_i915_gem_object *batch_obj;
>   	struct drm_i915_gem_exec_object2 shadow_exec_entry;
> @@ -1602,11 +1603,13 @@ static int gen8_dispatch_bsd_ring(struct drm_device *dev,
>   		params->batch_obj_vm_offset = i915_gem_obj_offset(batch_obj, vm);
>   
>   	/* Allocate a request for this batch buffer nice and early. */
> -	ret = i915_gem_request_alloc(ring, ctx, &params->request);
> -	if (ret)
> +	req = i915_gem_request_alloc(ring, ctx);
> +	if (IS_ERR(req)) {
> +		ret = PTR_ERR(req);
>   		goto err_batch_unpin;
> +	}

Jump down..

>   
> -	ret = i915_gem_request_add_to_client(params->request, file);
> +	ret = i915_gem_request_add_to_client(req, file);
>   	if (ret)
>   		goto err_batch_unpin;
>   
> @@ -1622,6 +1625,7 @@ static int gen8_dispatch_bsd_ring(struct drm_device *dev,
>   	params->dispatch_flags          = dispatch_flags;
>   	params->batch_obj               = batch_obj;
>   	params->ctx                     = ctx;
> +	params->request                 = req;
>   
>   	ret = dev_priv->gt.execbuf_submit(params, args, &eb->vmas);
>   
> @@ -1645,8 +1649,8 @@ static int gen8_dispatch_bsd_ring(struct drm_device *dev,
>   	 * must be freed again. If it was submitted then it is being tracked
>   	 * on the active request list and no clean up is required here.
>   	 */
> -	if (ret && params->request)
> -		i915_gem_request_cancel(params->request);
> +	if (ret && req)
> +		i915_gem_request_cancel(req);

.. to here, where req is an ERR_PTR and you call i915_gem_request_cancel
on it and end up with:

[   33.009942] BUG: unable to handle kernel paging request at ffffffffffffffca
[   33.018003] IP: [<ffffffffa01d9428>] i915_gem_request_cancel+0x8/0x60 [i915]
[   33.026191] PGD 2a0b067 PUD 2a0d067 PMD 0 
[   33.031017] Oops: 0000 [#1] PREEMPT SMP 
[   33.035666] Modules linked in: hid_generic usbhid i915 i2c_algo_bit drm_kms_helper asix usbnet libphy cfbfillrect syscopyarea mii cfbimgblt sysfillrect sysimgblt fb_sys_fops cfbcopyarea drm coretemp i2c_hid hid pinctrl_sunrisepoint pinctrl_intel video acpi_pad nls_iso8859_1 e1000e ptp psmouse pps_core ahci libahci
[   33.068007] CPU: 0 PID: 1891 Comm: gem_close_race Tainted: G     U          4.4.0-160121+ #123
[   33.078000] Hardware name: Intel Corporation Skylake Client platform/Skylake AIO DDR3L RVP10, BIOS SKLSE2R1.R00.X100.B01.1509220551 09/22/2015
[   33.092745] task: ffff88016b750000 ti: ffff88016c980000 task.ti: ffff88016c980000
[   33.101497] RIP: 0010:[<ffffffffa01d9428>]  [<ffffffffa01d9428>] i915_gem_request_cancel+0x8/0x60 [i915]
[   33.112549] RSP: 0018:ffff88016c983c10  EFLAGS: 00010202
[   33.118858] RAX: 0000000000000001 RBX: ffffffffffffff92 RCX: 000000018040003e
[   33.127246] RDX: 000000018040003f RSI: ffffea0001f10080 RDI: ffffffffffffff92
[   33.135650] RBP: ffff88016c983c18 R08: ffff88007c402f00 R09: 000000007c402001
[   33.144051] R10: ffff880172c170c0 R11: ffffea0001f10080 R12: ffff88007c402f00
[   33.152474] R13: 00000000ffffff92 R14: ffffffffffffff92 R15: ffff88016b9143d0
[   33.160888] FS:  00007fe0c4b09700(0000) GS:ffff880172c00000(0000) knlGS:0000000000000000
[   33.170398] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   33.177254] CR2: ffffffffffffffca CR3: 0000000086f41000 CR4: 00000000003406f0
[   33.185704] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   33.194155] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[   33.202603] Stack:
[   33.205244]  ffff88007c402f00 ffff88016c983d68 ffffffffa01ce10b ffffea0005980cc0
[   33.214067]  ffff88016c983cb8 ffffffffa01d7785 ffff88016c983c88 ffff88008705e218
[   33.222892]  ffff88016c983e10 ffff88007d808000 ffff88016c983df0 ffff88007ff75740
[   33.231717] Call Trace:
[   33.234889]  [<ffffffffa01ce10b>] i915_gem_do_execbuffer.isra.25+0x10cb/0x12a0 [i915]
[   33.244184]  [<ffffffffa01d7785>] ? i915_gem_object_get_pages_gtt+0x225/0x3c0 [i915]
[   33.253389]  [<ffffffffa01ddfb6>] ? i915_gem_pwrite_ioctl+0xd6/0x9f0 [i915]
[   33.261704]  [<ffffffffa01cee68>] i915_gem_execbuffer2+0xa8/0x250 [i915]
[   33.269738]  [<ffffffffa00f65d8>] drm_ioctl+0x258/0x4f0 [drm]
[   33.276691]  [<ffffffff810d3a97>] ? shmem_destroy_inode+0x17/0x20
[   33.284047]  [<ffffffffa01cedc0>] ? i915_gem_execbuffer+0x340/0x340 [i915]
[   33.292293]  [<ffffffff8111921c>] ? __dentry_kill+0x14c/0x1d0
[   33.299275]  [<ffffffff81121587>] ? mntput_no_expire+0x27/0x1a0
[   33.306459]  [<ffffffff8111590d>] do_vfs_ioctl+0x2cd/0x4a0
[   33.313157]  [<ffffffff8111eac2>] ? __fget+0x72/0xb0
[   33.319274]  [<ffffffff81115b1c>] SyS_ioctl+0x3c/0x70
[   33.325496]  [<ffffffff814effd7>] entry_SYSCALL_64_fastpath+0x12/0x6a
[   33.333289] Code: 00 48 c7 c7 d8 69 27 a0 31 c0 e8 d4 08 e7 e0 e9 b8 fe ff ff 66 66 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 48 89 e5 53 48 89 fb <48> 8b 7f 38 e8 5f 2a 01 00 48 8b 43 10 48 8b 40 10 8b 40 60 83 
[   33.356041] RIP  [<ffffffffa01d9428>] i915_gem_request_cancel+0x8/0x60 [i915]
[   33.364700]  RSP <ffff88016c983c10>
[   33.369208] CR2: ffffffffffffffca
[   33.373527] ---[ end trace d38fe9382064e353 ]---
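
One way to avoid this (a sketch only, not something proposed in the
thread) would be to clear req before taking the unwind path, so the
existing "if (ret && req)" guard at the bottom stays safe:

	req = i915_gem_request_alloc(ring, ctx);
	if (IS_ERR(req)) {
		ret = PTR_ERR(req);
		req = NULL;	/* don't hand an ERR_PTR to the cleanup below */
		goto err_batch_unpin;
	}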

Regards,

Tvrtko

Patch

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index af30148..dfebf9f 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2279,9 +2279,9 @@  struct drm_i915_gem_request {
 
 };
 
-int i915_gem_request_alloc(struct intel_engine_cs *ring,
-			   struct intel_context *ctx,
-			   struct drm_i915_gem_request **req_out);
+struct drm_i915_gem_request * __must_check
+i915_gem_request_alloc(struct intel_engine_cs *engine,
+		       struct intel_context *ctx);
 void i915_gem_request_cancel(struct drm_i915_gem_request *req);
 void i915_gem_request_free(struct kref *req_ref);
 int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 6b0102d..8e716b6 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2690,9 +2690,10 @@  void i915_gem_request_free(struct kref *req_ref)
 	kmem_cache_free(req->i915->requests, req);
 }
 
-int i915_gem_request_alloc(struct intel_engine_cs *ring,
-			   struct intel_context *ctx,
-			   struct drm_i915_gem_request **req_out)
+static inline int
+__i915_gem_request_alloc(struct intel_engine_cs *ring,
+			 struct intel_context *ctx,
+			 struct drm_i915_gem_request **req_out)
 {
 	struct drm_i915_private *dev_priv = to_i915(ring->dev);
 	struct drm_i915_gem_request *req;
@@ -2755,6 +2756,31 @@  int i915_gem_request_alloc(struct intel_engine_cs *ring,
 	return ret;
 }
 
+/**
+ * i915_gem_request_alloc - allocate a request structure
+ *
+ * @engine: engine that we wish to issue the request on.
+ * @ctx: context that the request will be associated with.
+ *       This can be NULL if the request is not directly related to
+ *       any specific user context, in which case this function will
+ *       choose an appropriate context to use.
+ *
+ * Returns a pointer to the allocated request if successful,
+ * or an error code if not.
+ */
+struct drm_i915_gem_request *
+i915_gem_request_alloc(struct intel_engine_cs *engine,
+		       struct intel_context *ctx)
+{
+	struct drm_i915_gem_request *req;
+	int err;
+
+	if (ctx == NULL)
+		ctx = engine->default_context;
+	err = __i915_gem_request_alloc(engine, ctx, &req);
+	return err ? ERR_PTR(err) : req;
+}
+
 void i915_gem_request_cancel(struct drm_i915_gem_request *req)
 {
 	intel_ring_reserved_space_cancel(req->ringbuf);
@@ -3172,9 +3198,13 @@  void i915_gem_reset(struct drm_device *dev)
 			return 0;
 
 		if (*to_req == NULL) {
-			ret = i915_gem_request_alloc(to, to->default_context, to_req);
-			if (ret)
-				return ret;
+			struct drm_i915_gem_request *req;
+
+			req = i915_gem_request_alloc(to, NULL);
+			if (IS_ERR(req))
+				return PTR_ERR(req);
+
+			*to_req = req;
 		}
 
 		trace_i915_gem_ring_sync_to(*to_req, from, from_req);
@@ -3374,9 +3404,9 @@  int i915_gpu_idle(struct drm_device *dev)
 		if (!i915.enable_execlists) {
 			struct drm_i915_gem_request *req;
 
-			ret = i915_gem_request_alloc(ring, ring->default_context, &req);
-			if (ret)
-				return ret;
+			req = i915_gem_request_alloc(ring, NULL);
+			if (IS_ERR(req))
+				return PTR_ERR(req);
 
 			ret = i915_switch_context(req);
 			if (ret) {
@@ -4871,10 +4901,9 @@  int i915_gem_init_rings(struct drm_device *dev)
 	for_each_ring(ring, dev_priv, i) {
 		struct drm_i915_gem_request *req;
 
-		WARN_ON(!ring->default_context);
-
-		ret = i915_gem_request_alloc(ring, ring->default_context, &req);
-		if (ret) {
+		req = i915_gem_request_alloc(ring, NULL);
+		if (IS_ERR(req)) {
+			ret = PTR_ERR(req);
 			i915_gem_cleanup_ringbuffer(dev);
 			goto out;
 		}
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 4edf1c0..dc32018 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1381,6 +1381,7 @@  static int gen8_dispatch_bsd_ring(struct drm_device *dev,
 		       struct drm_i915_gem_exec_object2 *exec)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct drm_i915_gem_request *req = NULL;
 	struct eb_vmas *eb;
 	struct drm_i915_gem_object *batch_obj;
 	struct drm_i915_gem_exec_object2 shadow_exec_entry;
@@ -1602,11 +1603,13 @@  static int gen8_dispatch_bsd_ring(struct drm_device *dev,
 		params->batch_obj_vm_offset = i915_gem_obj_offset(batch_obj, vm);
 
 	/* Allocate a request for this batch buffer nice and early. */
-	ret = i915_gem_request_alloc(ring, ctx, &params->request);
-	if (ret)
+	req = i915_gem_request_alloc(ring, ctx);
+	if (IS_ERR(req)) {
+		ret = PTR_ERR(req);
 		goto err_batch_unpin;
+	}
 
-	ret = i915_gem_request_add_to_client(params->request, file);
+	ret = i915_gem_request_add_to_client(req, file);
 	if (ret)
 		goto err_batch_unpin;
 
@@ -1622,6 +1625,7 @@  static int gen8_dispatch_bsd_ring(struct drm_device *dev,
 	params->dispatch_flags          = dispatch_flags;
 	params->batch_obj               = batch_obj;
 	params->ctx                     = ctx;
+	params->request                 = req;
 
 	ret = dev_priv->gt.execbuf_submit(params, args, &eb->vmas);
 
@@ -1645,8 +1649,8 @@  static int gen8_dispatch_bsd_ring(struct drm_device *dev,
 	 * must be freed again. If it was submitted then it is being tracked
 	 * on the active request list and no clean up is required here.
 	 */
-	if (ret && params->request)
-		i915_gem_request_cancel(params->request);
+	if (ret && req)
+		i915_gem_request_cancel(req);
 
 	mutex_unlock(&dev->struct_mutex);
 
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index a851cb7..991fda8 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11709,9 +11709,11 @@  static int intel_crtc_page_flip(struct drm_crtc *crtc,
 					obj->last_write_req);
 	} else {
 		if (!request) {
-			ret = i915_gem_request_alloc(ring, ring->default_context, &request);
-			if (ret)
+			request = i915_gem_request_alloc(ring, NULL);
+			if (IS_ERR(request)) {
+				ret = PTR_ERR(request);
 				goto cleanup_unpin;
+			}
 		}
 
 		ret = dev_priv->display.queue_flip(dev, crtc, fb, obj, request,
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index faaf490..ec2482da 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -2520,11 +2520,10 @@  int intel_lr_context_deferred_alloc(struct intel_context *ctx,
 	if (ctx != ring->default_context && ring->init_context) {
 		struct drm_i915_gem_request *req;
 
-		ret = i915_gem_request_alloc(ring,
-			ctx, &req);
-		if (ret) {
-			DRM_ERROR("ring create req: %d\n",
-				ret);
+		req = i915_gem_request_alloc(ring, ctx);
+		if (IS_ERR(req)) {
+			ret = PTR_ERR(req);
+			DRM_ERROR("ring create req: %d\n", ret);
 			goto error_ringbuf;
 		}
 
diff --git a/drivers/gpu/drm/i915/intel_overlay.c b/drivers/gpu/drm/i915/intel_overlay.c
index 76f1980..9168413 100644
--- a/drivers/gpu/drm/i915/intel_overlay.c
+++ b/drivers/gpu/drm/i915/intel_overlay.c
@@ -240,9 +240,9 @@  static int intel_overlay_on(struct intel_overlay *overlay)
 	WARN_ON(overlay->active);
 	WARN_ON(IS_I830(dev) && !(dev_priv->quirks & QUIRK_PIPEA_FORCE));
 
-	ret = i915_gem_request_alloc(ring, ring->default_context, &req);
-	if (ret)
-		return ret;
+	req = i915_gem_request_alloc(ring, NULL);
+	if (IS_ERR(req))
+		return PTR_ERR(req);
 
 	ret = intel_ring_begin(req, 4);
 	if (ret) {
@@ -283,9 +283,9 @@  static int intel_overlay_continue(struct intel_overlay *overlay,
 	if (tmp & (1 << 17))
 		DRM_DEBUG("overlay underrun, DOVSTA: %x\n", tmp);
 
-	ret = i915_gem_request_alloc(ring, ring->default_context, &req);
-	if (ret)
-		return ret;
+	req = i915_gem_request_alloc(ring, NULL);
+	if (IS_ERR(req))
+		return PTR_ERR(req);
 
 	ret = intel_ring_begin(req, 2);
 	if (ret) {
@@ -349,9 +349,9 @@  static int intel_overlay_off(struct intel_overlay *overlay)
 	 * of the hw. Do it in both cases */
 	flip_addr |= OFC_UPDATE;
 
-	ret = i915_gem_request_alloc(ring, ring->default_context, &req);
-	if (ret)
-		return ret;
+	req = i915_gem_request_alloc(ring, NULL);
+	if (IS_ERR(req))
+		return PTR_ERR(req);
 
 	ret = intel_ring_begin(req, 6);
 	if (ret) {
@@ -423,9 +423,9 @@  static int intel_overlay_release_old_vid(struct intel_overlay *overlay)
 		/* synchronous slowpath */
 		struct drm_i915_gem_request *req;
 
-		ret = i915_gem_request_alloc(ring, ring->default_context, &req);
-		if (ret)
-			return ret;
+		req = i915_gem_request_alloc(ring, NULL);
+		if (IS_ERR(req))
+			return PTR_ERR(req);
 
 		ret = intel_ring_begin(req, 2);
 		if (ret) {