[11/15] drm/i915: Implementation of GuC client

Message ID 1434393394-21002-12-git-send-email-david.s.gordon@intel.com (mailing list archive)
State New, archived

Commit Message

Dave Gordon June 15, 2015, 6:36 p.m. UTC
From: Alex Dai <yu.dai@intel.com>

A GuC client has its own doorbell and workqueue. It maintains the
doorbell cache line, the process descriptor object, and the work queue items.

A default guc_client is created for the i915 driver to use for
normal-priority in-order submission.
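
For reference, the layout of the single GEM object backing a client is
as follows (a sketch based on guc_client_alloc() in this patch, where
GUC_DB_SIZE is one page):

    page 0:    doorbell cacheline + process descriptor (the descriptor
               occupies whichever half of the page the doorbell doesn't)
    pages 1-2: work queue (GUC_WQ_SIZE bytes)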

Issue: VIZ-4884
Signed-off-by: Alex Dai <yu.dai@intel.com>
Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
---
 drivers/gpu/drm/i915/i915_guc_submission.c |  668 ++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/intel_guc.h           |    5 +
 drivers/gpu/drm/i915/intel_guc_loader.c    |   10 +
 3 files changed, 683 insertions(+)

Comments

Chris Wilson June 15, 2015, 9:55 p.m. UTC | #1
On Mon, Jun 15, 2015 at 07:36:29PM +0100, Dave Gordon wrote:
> +/* Reserve space for a workqueue item and return its offset */
> +static int guc_get_workqueue_space(struct i915_guc_client *gc, u32 *offset)
> +{
> +	struct guc_process_desc *desc;
> +	void *base;
> +	u32 size = sizeof(struct guc_wq_item);
> +	int ret = 0, timeout_counter = 200;
> +	unsigned long flags;
> +
> +	base = kmap_atomic(i915_gem_object_get_page(gc->client_obj, 0));
> +	desc = base + gc->proc_desc_offset;
> +
> +	while (timeout_counter-- > 0) {
> +		spin_lock_irqsave(&gc->wq_lock, flags);
> +
> +		ret = wait_for_atomic(CIRC_SPACE(gc->wq_tail, desc->head,
> +				gc->wq_size) >= size, 1);

What is the point of this loop? Drop the spinlock 200 times? You already
have a timeout, and the loop extends that by a factor of 200. You merely
allow gazumping; however, I haven't looked at the locking to see what
you intend to lock (since it is not described at all).
-Chris
Dave Gordon June 19, 2015, 5:55 p.m. UTC | #2
On 15/06/15 22:55, Chris Wilson wrote:
> On Mon, Jun 15, 2015 at 07:36:29PM +0100, Dave Gordon wrote:
>> +/* Reserve space for a workqueue item and return its offset */
>> +static int guc_get_workqueue_space(struct i915_guc_client *gc, u32 *offset)
>> +{
>> +	struct guc_process_desc *desc;
>> +	void *base;
>> +	u32 size = sizeof(struct guc_wq_item);
>> +	int ret = 0, timeout_counter = 200;
>> +	unsigned long flags;
>> +
>> +	base = kmap_atomic(i915_gem_object_get_page(gc->client_obj, 0));
>> +	desc = base + gc->proc_desc_offset;
>> +
>> +	while (timeout_counter-- > 0) {
>> +		spin_lock_irqsave(&gc->wq_lock, flags);
>> +
>> +		ret = wait_for_atomic(CIRC_SPACE(gc->wq_tail, desc->head,
>> +				gc->wq_size) >= size, 1);
> 
> What is the point of this loop? Drop the spinlock 200 times? You already
> have a timeout, and the loop extends that by a factor of 200. You merely
> allow gazumping; however, I haven't looked at the locking to see what
> you intend to lock (since it is not described at all).
> -Chris

Hmmm .. I didn't write this code, so I'm guessing somewhat; but ...

A 'wq_lock' must lock a 'wq', which from the name of the function is a
workqueue: a circular buffer shared between the host and the GuC. As
with the main ringbuffers, the host (producer) advances the tail
(gc->wq_tail) and the other party (the consumer, in this case the GuC)
advances the head (desc->head).

Presumably the GuC could take many (up to 200) ms to get round to making
space available, in a worst-case scenario where it's busy servicing
interrupts and doing other things.

Now we certainly don't want to spin for up to 200ms with interrupts
disabled, so

    spin_lock_irqsave(&gc->wq_lock, flags);
    ret = wait_for_atomic(CIRC_SPACE(gc->wq_tail, desc->head,
                                     gc->wq_size) >= size, *200*);
    spin_unlock_irqrestore(&gc->wq_lock, flags);

would be a bad idea. OTOH I don't think there's any other lock held by
anyone higher up in the callchain, so we /probably do/ need the spinlock
to protect the updating of wq_tail when the wait_for_atomic succeeds.

So yes, I think up-to-200 iterations of polling for free space for up to
1ms each time is not too unreasonable, given that apparently we have to
poll, at least for now (once the scheduler lands, we will always be able
to predict how much space is available and avoid trying to launch
batches when there isn't enough).
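
In outline, the producer-side check amounts to something like the
following (just a sketch, not the patch code; "wq_try_claim" is a
made-up name, and the real function adds the locking and the polling
timeout discussed above):

    /*
     * Host (producer) side: claim 'size' bytes at the tail, but only
     * if the GuC (consumer, which advances desc->head) has left room.
     */
    static bool wq_try_claim(struct i915_guc_client *gc,
                             struct guc_process_desc *desc,
                             u32 size, u32 *offset)
    {
        if (CIRC_SPACE(gc->wq_tail, desc->head, gc->wq_size) < size)
            return false;  /* consumer hasn't caught up yet */

        *offset = gc->wq_tail;
        /* wq_size is a power of two, so wrap with a mask */
        gc->wq_tail = (gc->wq_tail + size) & (gc->wq_size - 1);
        return true;
    }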

.Dave.

Patch

diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
index 4efb73a..487f295 100644
--- a/drivers/gpu/drm/i915/i915_guc_submission.c
+++ b/drivers/gpu/drm/i915/i915_guc_submission.c
@@ -27,6 +27,536 @@ 
 #include "intel_guc.h"
 
 /**
+ * DOC: GuC Client
+ *
+ * i915_guc_client:
+ * We use the term client to avoid confusion with contexts. An i915_guc_client
+ * is equivalent to the GuC object guc_context_desc. This context descriptor is
+ * allocated from a pool of 1024 entries. The kernel driver allocates a
+ * doorbell and a workqueue for each client, along with the process descriptor
+ * (guc_process_desc), which is mapped to client space so that the client can
+ * write a Work Item and then ring the doorbell.
+ *
+ * To simplify the implementation, we allocate one gem object that contains all
+ * pages for doorbell, process descriptor and workqueue.
+ *
+ * The Scratch registers:
+ * There are 16 MMIO-based registers starting from 0xC180. The kernel driver
+ * writes a value to the action register (SOFT_SCRATCH_0) along with any data.
+ * It then triggers an interrupt on the GuC via another register write (0xC4C8).
+ * The firmware writes a success/fail code back to the action register after
+ * processing the request. The kernel driver polls waiting for this update and
+ * then proceeds.
+ * See host2guc_action()
+ *
+ * Doorbells:
+ * Doorbells are interrupts to uKernel. A doorbell is a single cache line (QW)
+ * mapped into process space.
+ *
+ * Work Items:
+ * There are several types of work items that the host may place into a
+ * workqueue, each with its own requirements and limitations. Currently only
+ * WQ_TYPE_INORDER, which represents an in-order queue, is needed to support
+ * legacy submission via the GuC. The kernel driver packs the ring tail pointer
+ * and an ELSP context descriptor dword into each Work Item.
+ * See guc_add_workqueue_item()
+ *
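+ * Submission flow:
+ * i915_guc_submit() writes a Work Item into the client's workqueue via
+ * guc_add_workqueue_item() and then rings the client's doorbell via
+ * guc_ring_doorbell(), telling the GuC that new work is available.
+ *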
+ */
+
+/*
+ * Read GuC command/status register (SOFT_SCRATCH_0)
+ * Return true if it contains a response rather than a command
+ */
+static inline bool host2guc_get_response(struct drm_i915_private *dev_priv,
+					 u32 *status)
+{
+	u32 val = I915_READ(SOFT_SCRATCH(0));
+	*status = val;
+	return GUC2HOST_IS_RESPONSE(val);
+}
+
+static int host2guc_action(struct intel_guc *guc, u32 *data, u32 len)
+{
+	struct drm_i915_private *dev_priv = guc_to_i915(guc);
+	u32 status;
+	int i;
+	int ret;
+
+	if (WARN_ON(len < 1 || len > 15))
+		return -EINVAL;
+
+	spin_lock(&dev_priv->guc.host2guc_lock);
+
+	dev_priv->guc.action_count += 1;
+	dev_priv->guc.action_cmd = data[0];
+
+	for (i = 0; i < len; i++)
+		I915_WRITE(SOFT_SCRATCH(i), data[i]);
+
+	POSTING_READ(SOFT_SCRATCH(i - 1));
+
+	I915_WRITE(HOST2GUC_INTERRUPT, HOST2GUC_TRIGGER);
+
+	ret = wait_for_atomic(host2guc_get_response(dev_priv, &status), 10);
+	if (status != GUC2HOST_STATUS_SUCCESS) {
+		/* Either the GuC didn't respond, which is a timeout,
+		 * or it returned a failure code. */
+		if (ret != -ETIMEDOUT)
+			ret = -EIO;
+
+		DRM_ERROR("GUC: host2guc action 0x%X failed. ret=%d "
+				"status=0x%08X response=0x%08X\n",
+				data[0], ret, status,
+				I915_READ(SOFT_SCRATCH(15)));
+
+		dev_priv->guc.action_fail += 1;
+		dev_priv->guc.action_err = ret;
+	}
+	dev_priv->guc.action_status = status;
+
+	spin_unlock(&dev_priv->guc.host2guc_lock);
+
+	return ret;
+}
+
+/*
+ * Tell the GuC to allocate or deallocate a specific doorbell
+ */
+
+static int host2guc_allocate_doorbell(struct intel_guc *guc,
+				      struct i915_guc_client *client)
+{
+	u32 data[2];
+
+	data[0] = HOST2GUC_ACTION_ALLOCATE_DOORBELL;
+	data[1] = client->ctx_index;
+
+	return host2guc_action(guc, data, 2);
+}
+
+static int host2guc_release_doorbell(struct intel_guc *guc,
+				     struct i915_guc_client *client)
+{
+	u32 data[2];
+
+	data[0] = HOST2GUC_ACTION_DEALLOCATE_DOORBELL;
+	data[1] = client->ctx_index;
+
+	return host2guc_action(guc, data, 2);
+}
+
+/*
+ * Initialise, update, or clear doorbell data shared with the GuC
+ *
+ * These functions modify shared data and so need access to the mapped
+ * client object which contains the page being used for the doorbell
+ */
+
+static void guc_init_doorbell(struct intel_guc *guc,
+			      struct i915_guc_client *client)
+{
+	struct guc_doorbell_info *doorbell;
+	void *base;
+
+	base = kmap_atomic(i915_gem_object_get_page(client->client_obj, 0));
+	doorbell = base + client->doorbell_offset;
+
+	doorbell->db_status = 1;
+	doorbell->cookie = 0;
+
+	kunmap_atomic(base);
+}
+
+static int guc_ring_doorbell(struct i915_guc_client *gc)
+{
+	struct guc_process_desc *desc;
+	union guc_doorbell_qw db_cmp, db_exc, db_ret;
+	union guc_doorbell_qw *db;
+	void *base;
+	int attempt = 2, ret = -EAGAIN;
+
+	base = kmap_atomic(i915_gem_object_get_page(gc->client_obj, 0));
+	desc = base + gc->proc_desc_offset;
+
+	/* Update the tail so it is visible to GuC */
+	desc->tail = gc->wq_tail;
+
+	/* current cookie */
+	db_cmp.db_status = GUC_DOORBELL_ENABLED;
+	db_cmp.cookie = gc->cookie;
+
+	/* cookie to be updated */
+	db_exc.db_status = GUC_DOORBELL_ENABLED;
+	db_exc.cookie = gc->cookie + 1;
+	if (db_exc.cookie == 0)
+		db_exc.cookie = 1;
+
+	/* pointer to the current doorbell cacheline */
+	db = base + gc->doorbell_offset;
+
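+	/* Ring the doorbell by bumping the cookie with an atomic cmpxchg;
+	 * if the exchange fails, the GuC changed the cookie (or disabled
+	 * the doorbell) under us, so resync and retry below. */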
+	while (attempt--) {
+		/* let's ring the doorbell */
+		db_ret.value_qw = atomic64_cmpxchg((atomic64_t *)db,
+			db_cmp.value_qw, db_exc.value_qw);
+
+		/* if the exchange was successfully executed */
+		if (db_ret.value_qw == db_cmp.value_qw) {
+			/* db was successfully rung */
+			gc->cookie = db_exc.cookie;
+			ret = 0;
+			break;
+		}
+
+		/* XXX: the doorbell was lost; we need to acquire it again */
+		if (db_ret.db_status == GUC_DOORBELL_DISABLED)
+			break;
+
+		DRM_ERROR("Cookie mismatch. Expected %d, returned %d\n",
+			  db_cmp.cookie, db_ret.cookie);
+
+		/* update the cookie to newly read cookie from GuC */
+		db_cmp.cookie = db_ret.cookie;
+		db_exc.cookie = db_ret.cookie + 1;
+		if (db_exc.cookie == 0)
+			db_exc.cookie = 1;
+	}
+
+	kunmap_atomic(base);
+	return ret;
+}
+
+static void guc_disable_doorbell(struct intel_guc *guc,
+				 struct i915_guc_client *client)
+{
+	struct drm_i915_private *dev_priv = guc_to_i915(guc);
+	struct guc_doorbell_info *doorbell;
+	void *base;
+	int drbreg = GEN8_DRBREGL(client->doorbell_id);
+	int value;
+
+	base = kmap_atomic(i915_gem_object_get_page(client->client_obj, 0));
+	doorbell = base + client->doorbell_offset;
+
+	doorbell->db_status = 0;
+
+	kunmap_atomic(base);
+
+	I915_WRITE(drbreg, I915_READ(drbreg) & ~GEN8_DRB_VALID);
+
+	value = I915_READ(drbreg);
+	WARN_ON((value & GEN8_DRB_VALID) != 0);
+
+	I915_WRITE(GEN8_DRBREGU(client->doorbell_id), 0);
+	I915_WRITE(drbreg, 0);
+
+	/* XXX: wait for any interrupts */
+	/* XXX: wait for workqueue to drain */
+}
+
+/*
+ * Select, assign and release doorbell cachelines
+ *
+ * These functions track which doorbell cachelines are in use.
+ * The data they manipulate is protected by the host2guc lock.
+ */
+
+static off_t select_doorbell_cacheline(struct intel_guc *guc)
+{
+	const int cacheline_size = boot_cpu_data.x86_clflush_size;
+	const int cacheline_per_page = PAGE_SIZE / cacheline_size;
+	off_t offset;
+
+	spin_lock(&guc->host2guc_lock);
+
+	/* The doorbell uses a single cache line */
+	offset = cacheline_size * guc->db_cacheline;
+
+	/* Move to the next cache line, to reduce contention */
+	guc->db_cacheline = (guc->db_cacheline + 1) % cacheline_per_page;
+
+	spin_unlock(&guc->host2guc_lock);
+
+	return offset;
+}
+
+static uint16_t assign_doorbell(struct intel_guc *guc, u32 priority)
+{
+	const uint16_t size = I915_MAX_DOORBELLS;
+	uint16_t id;
+
+	spin_lock(&guc->host2guc_lock);
+
+	/* The bitmap is split into two halves - high and normal priority. */
+	if (priority <= GUC_CTX_PRIORITY_HIGH) {
+		id = find_next_zero_bit(guc->doorbell_bitmap, size, size / 2);
+		if (id == size)
+			id = INVALID_DOORBELL_ID;
+	} else {
+		id = find_next_zero_bit(guc->doorbell_bitmap, size / 2, 0);
+		if (id == size / 2)
+			id = INVALID_DOORBELL_ID;
+	}
+
+	if (id != INVALID_DOORBELL_ID)
+		bitmap_set(guc->doorbell_bitmap, id, 1);
+
+	spin_unlock(&guc->host2guc_lock);
+
+	return id;
+}
+
+static void release_doorbell(struct intel_guc *guc, uint16_t id)
+{
+	spin_lock(&guc->host2guc_lock);
+	bitmap_clear(guc->doorbell_bitmap, id, 1);
+	spin_unlock(&guc->host2guc_lock);
+}
+
+/*
+ * Initialise the process descriptor shared with the GuC firmware.
+ */
+static void guc_init_proc_desc(struct intel_guc *guc,
+			       struct i915_guc_client *client)
+{
+	struct guc_process_desc *desc;
+	void *base;
+
+	base = kmap_atomic(i915_gem_object_get_page(client->client_obj, 0));
+	desc = base + client->proc_desc_offset;
+
+	memset(desc, 0, sizeof(*desc));
+
+	/*
+	 * XXX: pDoorbell and WQVBaseAddress are pointers in process address
+	 * space for ring3 clients (set them as in mmap_ioctl) or kernel
+	 * space for kernel clients (map on demand instead? May make debug
+	 * easier to have it mapped).
+	 */
+	desc->wq_base_addr = 0;
+	desc->db_base_addr = 0;
+
+	desc->context_id = client->ctx_index;
+	desc->wq_size_bytes = client->wq_size;
+	desc->wq_status = WQ_STATUS_ACTIVE;
+	desc->priority = client->priority;
+
+	kunmap_atomic(base);
+}
+
+/*
+ * Initialise/clear the context descriptor shared with the GuC firmware.
+ *
+ * This descriptor tells the GuC where (in GGTT space) to find the important
+ * data structures relating to this client (doorbell, process descriptor,
+ * work queue, etc).
+ */
+
+static void guc_init_ctx_desc(struct intel_guc *guc,
+			      struct i915_guc_client *client)
+{
+	struct guc_context_desc desc;
+	struct sg_table *sg;
+
+	memset(&desc, 0, sizeof(desc));
+
+	desc.attribute = GUC_CTX_DESC_ATTR_ACTIVE | GUC_CTX_DESC_ATTR_KERNEL;
+	desc.context_id = client->ctx_index;
+	desc.priority = client->priority;
+	desc.engines_used = (1 << RCS) | (1 << VCS) | (1 << BCS) |
+			    (1 << VECS) | (1 << VCS2); /* all engines */
+	desc.db_id = client->doorbell_id;
+
+	/*
+	 * The CPU address is only needed at certain points, so kmap_atomic on
+	 * demand instead of storing it in the ctx descriptor.
+	 * XXX: May make debug easier to have it mapped
+	 */
+	desc.db_trigger_cpu = 0;
+	desc.db_trigger_uk = client->doorbell_offset +
+		i915_gem_obj_ggtt_offset(client->client_obj);
+	desc.db_trigger_phy = client->doorbell_offset +
+		sg_dma_address(client->client_obj->pages->sgl);
+
+	desc.process_desc = client->proc_desc_offset +
+		i915_gem_obj_ggtt_offset(client->client_obj);
+
+	desc.wq_addr = client->wq_offset +
+		i915_gem_obj_ggtt_offset(client->client_obj);
+
+	desc.wq_size = client->wq_size;
+
+	/*
+	 * XXX: Take LRCs from an existing intel_context if this is not an
+	 * IsKMDCreatedContext client
+	 */
+	desc.desc_private = (uintptr_t)client;
+
+	/* The context-descriptor pool is already pinned */
+	sg = guc->ctx_pool_obj->pages;
+	sg_pcopy_from_buffer(sg->sgl, sg->nents, &desc, sizeof(desc),
+			     sizeof(desc) * client->ctx_index);
+}
+
+static void guc_fini_ctx_desc(struct intel_guc *guc,
+			      struct i915_guc_client *client)
+{
+	struct guc_context_desc desc;
+	struct sg_table *sg;
+
+	memset(&desc, 0, sizeof(desc));
+
+	sg = guc->ctx_pool_obj->pages;
+	sg_pcopy_from_buffer(sg->sgl, sg->nents, &desc, sizeof(desc),
+			     sizeof(desc) * client->ctx_index);
+}
+
+/* Reserve space for a workqueue item and return its offset */
+static int guc_get_workqueue_space(struct i915_guc_client *gc, u32 *offset)
+{
+	struct guc_process_desc *desc;
+	void *base;
+	u32 size = sizeof(struct guc_wq_item);
+	int ret = 0, timeout_counter = 200;
+	unsigned long flags;
+
+	base = kmap_atomic(i915_gem_object_get_page(gc->client_obj, 0));
+	desc = base + gc->proc_desc_offset;
+
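+	/*
+	 * Poll for space in slices of up to 1ms at a time, so that the
+	 * wq_lock is held (and interrupts disabled) for at most ~1ms per
+	 * iteration rather than for the whole 200ms budget.
+	 */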
+	while (timeout_counter-- > 0) {
+		spin_lock_irqsave(&gc->wq_lock, flags);
+
+		ret = wait_for_atomic(CIRC_SPACE(gc->wq_tail, desc->head,
+				gc->wq_size) >= size, 1);
+
+		if (!ret) {
+			*offset = gc->wq_tail;
+
+			/* advance the tail for next workqueue item */
+			gc->wq_tail += size;
+			gc->wq_tail &= gc->wq_size - 1;
+
+			/* this will break the loop */
+			timeout_counter = 0;
+		}
+
+		spin_unlock_irqrestore(&gc->wq_lock, flags);
+	}
+
+	kunmap_atomic(base);
+
+	return ret;
+}
+
+static int guc_add_workqueue_item(struct i915_guc_client *gc,
+				  struct intel_context *ctx,
+				  struct intel_engine_cs *ring)
+{
+	struct drm_i915_gem_object *ctx_obj = ctx->engine[ring->id].state;
+	struct intel_ringbuffer *ringbuf = ctx->engine[ring->id].ringbuf;
+	struct guc_wq_item *wqi;
+	void *base;
+	u32 tail, wq_len, wq_off = 0;
+	int ret;
+
+	ret = guc_get_workqueue_space(gc, &wq_off);
+	if (ret)
+		return ret;
+
+	/* For now, a workqueue item is 4 DWs and the workqueue buffer is
+	 * 2 pages, so a wqi can neither straddle a page boundary nor wrap
+	 * around the end of the buffer. This simplifies the code below.
+	 *
+	 * XXX: if that ever changes, we will need to build the wqi in a
+	 * temporary buffer and copy it into the workqueue dword by dword.
+	 */
+	WARN_ON(sizeof(struct guc_wq_item) != 16);
+	WARN_ON(wq_off & 3);
+
+	/* wq starts from the page after doorbell / process_desc */
+	base = kmap_atomic(i915_gem_object_get_page(gc->client_obj,
+			(wq_off + GUC_DB_SIZE) >> PAGE_SHIFT));
+	wq_off &= PAGE_SIZE - 1;
+	wqi = (struct guc_wq_item *)((char *)base + wq_off);
+
+	/* len does not include the header */
+	wq_len = sizeof(struct guc_wq_item) / sizeof(u32) - 1;
+	wqi->header = WQ_TYPE_INORDER |
+			(wq_len << WQ_LEN_SHIFT) |
+			(ring->id << WQ_TARGET_SHIFT) |
+			WQ_NO_WCFLUSH_WAIT;
+
+	wqi->context_desc = (u32)execlists_ctx_descriptor(ring, ctx_obj);
+
+	/* The GuC firmware wants the tail index in qw */
+	tail = ringbuf->tail >> 3;
+	wqi->ring_tail = tail << WQ_RING_TAIL_SHIFT;
+	wqi->fence_id = 0; /* XXX: which fence should go here? */
+
+	kunmap_atomic(base);
+
+	return 0;
+}
+
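+/*
+ * The context image caches the GGTT address of its ring buffer, but the
+ * ring is pinned (and therefore placed) only on demand, so the cached
+ * address must be refreshed before submission.
+ */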
+static void update_context_image(struct intel_context *ctx,
+				 struct intel_engine_cs *ring)
+{
+	struct intel_ringbuffer *ringbuf = ctx->engine[ring->id].ringbuf;
+	unsigned long ringaddr = i915_gem_obj_ggtt_offset(ringbuf->obj);
+	struct drm_i915_gem_object *ctx_obj = ctx->engine[ring->id].state;
+	struct page *page = i915_gem_object_get_page(ctx_obj, LRC_STATE_PN);
+	uint32_t *reg_state;
+
+	reg_state = kmap_atomic(page);
+	reg_state[CTX_RING_BUFFER_START+1] = ringaddr;
+	kunmap_atomic(reg_state);
+}
+
+/**
+ * i915_guc_submit() - Submit commands through GuC
+ * @client:	the guc client where commands will go through
+ * @ctx:	LRC where commands come from
+ * @ring:	HW engine that will execute the commands
+ *
+ * Return:	0 on success
+ */
+int i915_guc_submit(struct i915_guc_client *client,
+		    struct intel_context *ctx,
+		    struct intel_engine_cs *ring)
+{
+	int q_ret, b_ret;
+	unsigned long flags;
+
+	/* Needed because the ctx and ring are pinned only on demand */
+	/* XXX: shall we move this to just after the ring is pinned? */
+	update_context_image(ctx, ring);
+
+	q_ret = guc_add_workqueue_item(client, ctx, ring);
+	if (q_ret == 0)
+		b_ret = guc_ring_doorbell(client);
+
+	spin_lock_irqsave(&client->wq_lock, flags);
+	client->submissions += 1;
+	if (q_ret) {
+		client->q_fail += 1;
+		client->retcode = q_ret;
+	} else if (b_ret) {
+		client->b_fail += 1;
+		client->retcode = q_ret = b_ret;
+	} else {
+		client->retcode = 0;
+	}
+	spin_unlock_irqrestore(&client->wq_lock, flags);
+
+	return q_ret;
+}
+
+/*
+ * Everything below here is concerned with setup & teardown, and is
+ * therefore not part of the somewhat time-critical batch-submission
+ * path of i915_guc_submit() above.
+ */
+
+/**
  * gem_allocate_guc_obj() - Allocate gem object for GuC usage
  * @dev:	drm device
  * @size:	size of object
@@ -75,6 +605,118 @@  static void gem_release_guc_obj(struct drm_i915_gem_object *obj)
 	drm_gem_object_unreference(&obj->base);
 }
 
+static void guc_client_free(struct drm_device *dev,
+			    struct i915_guc_client *client)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct intel_guc *guc = &dev_priv->guc;
+
+	if (!client)
+		return;
+
+	if (client->doorbell_id != INVALID_DOORBELL_ID) {
+		/*
+		 * First disable the doorbell, then tell the GuC we've
+		 * finished with it, and finally deallocate it in our bitmap
+		 */
+		guc_disable_doorbell(guc, client);
+		host2guc_release_doorbell(guc, client);
+		release_doorbell(guc, client->doorbell_id);
+	}
+
+	/*
+	 * XXX: wait for any outstanding submissions before freeing memory.
+	 * Be sure to drop any locks
+	 */
+
+	gem_release_guc_obj(client->client_obj);
+
+	if (client->ctx_index != INVALID_CTX_ID) {
+		guc_fini_ctx_desc(guc, client);
+		ida_simple_remove(&guc->ctx_ids, client->ctx_index);
+	}
+
+	kfree(client);
+}
+
+/**
+ * guc_client_alloc() - Allocate an i915_guc_client
+ * @dev:	drm device
+ * @priority:	one of the four priority levels: _CRITICAL, _HIGH, _NORMAL
+ * 		or _LOW. The kernel client that replaces ExecList submission
+ * 		is created with NORMAL priority; a client for the scheduler
+ * 		can have HIGH priority, while a preemption context can use
+ * 		CRITICAL.
+ *
+ * Return:	An i915_guc_client object on success, NULL on failure.
+ */
+static struct i915_guc_client *guc_client_alloc(struct drm_device *dev,
+						u32 priority)
+{
+	struct i915_guc_client *client;
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct intel_guc *guc = &dev_priv->guc;
+	struct drm_i915_gem_object *obj;
+
+	client = kzalloc(sizeof(*client), GFP_KERNEL);
+	if (!client)
+		return NULL;
+
+	client->doorbell_id = INVALID_DOORBELL_ID;
+	client->priority = priority;
+
+	client->ctx_index = (uint32_t)ida_simple_get(&guc->ctx_ids, 0,
+			MAX_GUC_GPU_CONTEXTS, GFP_KERNEL);
+	if (client->ctx_index >= MAX_GUC_GPU_CONTEXTS) {
+		client->ctx_index = INVALID_CTX_ID;
+		goto err;
+	}
+
+	/* The first page is the doorbell/proc_desc; the following two pages
+	 * are the workqueue. */
+	obj = gem_allocate_guc_obj(dev, GUC_DB_SIZE + GUC_WQ_SIZE);
+	if (!obj)
+		goto err;
+
+	client->client_obj = obj;
+	client->wq_offset = GUC_DB_SIZE;
+	client->wq_size = GUC_WQ_SIZE;
+	spin_lock_init(&client->wq_lock);
+
+	client->doorbell_offset = select_doorbell_cacheline(guc);
+
+	/*
+	 * Since the doorbell only requires a single cacheline, we can save
+	 * space by putting the application process descriptor in the same
+	 * page. Use the half of the page that doesn't include the doorbell.
+	 */
+	if (client->doorbell_offset >= (GUC_DB_SIZE / 2))
+		client->proc_desc_offset = 0;
+	else
+		client->proc_desc_offset = (GUC_DB_SIZE / 2);
+
+	client->doorbell_id = assign_doorbell(guc, client->priority);
+	if (client->doorbell_id == INVALID_DOORBELL_ID)
+		/* XXX: evict a doorbell instead */
+		goto err;
+
+	guc_init_proc_desc(guc, client);
+	guc_init_ctx_desc(guc, client);
+	guc_init_doorbell(guc, client);
+
+	/* Invalidate GuC TLB to let GuC take the latest updates to GTT. */
+	I915_WRITE(GEN8_GTCR, GEN8_GTCR_INVALIDATE);
+
+	/* XXX: Any cache flushes needed? General domain mgmt calls? */
+
+	if (host2guc_allocate_doorbell(guc, client))
+		goto err;
+
+	return client;
+
+err:
+	guc_client_free(dev, client);
+	return NULL;
+}
+
 static void guc_create_log(struct intel_guc *guc)
 {
 	struct drm_i915_private *dev_priv = guc_to_i915(guc);
@@ -150,6 +792,32 @@  int i915_guc_submission_init(struct drm_device *dev)
 	return 0;
 }
 
+int i915_guc_submission_enable(struct drm_device *dev)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct intel_guc *guc = &dev_priv->guc;
+	struct i915_guc_client *client;
+
+	/* client for execbuf submission */
+	client = guc_client_alloc(dev, GUC_CTX_PRIORITY_NORMAL);
+	if (!client) {
+		DRM_ERROR("Failed to create execbuf guc_client\n");
+		return -ENOMEM;
+	}
+
+	guc->execbuf_client = client;
+	return 0;
+}
+
+void i915_guc_submission_disable(struct drm_device *dev)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct intel_guc *guc = &dev_priv->guc;
+
+	guc_client_free(dev, guc->execbuf_client);
+	guc->execbuf_client = NULL;
+}
+
 void i915_guc_submission_fini(struct drm_device *dev)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
diff --git a/drivers/gpu/drm/i915/intel_guc.h b/drivers/gpu/drm/i915/intel_guc.h
index 06b68c2..147d288 100644
--- a/drivers/gpu/drm/i915/intel_guc.h
+++ b/drivers/gpu/drm/i915/intel_guc.h
@@ -173,6 +173,11 @@  extern void intel_guc_ucode_fini(struct drm_device *dev);
 
 /* i915_guc_submission.c */
 int i915_guc_submission_init(struct drm_device *dev);
+int i915_guc_submission_enable(struct drm_device *dev);
+int i915_guc_submit(struct i915_guc_client *client,
+		    struct intel_context *ctx,
+		    struct intel_engine_cs *ring);
+void i915_guc_submission_disable(struct drm_device *dev);
 void i915_guc_submission_fini(struct drm_device *dev);
 
 #endif
diff --git a/drivers/gpu/drm/i915/intel_guc_loader.c b/drivers/gpu/drm/i915/intel_guc_loader.c
index 0f74876..6e4667d 100644
--- a/drivers/gpu/drm/i915/intel_guc_loader.c
+++ b/drivers/gpu/drm/i915/intel_guc_loader.c
@@ -390,6 +390,8 @@  int intel_guc_ucode_load(struct drm_device *dev, bool wait)
 	if (guc_fw->uc_fw_fetch_status == INTEL_UC_FIRMWARE_PENDING && !wait)
 		return -EAGAIN;
 
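+	/*
+	 * (Re)loading the firmware invalidates any existing GuC submission
+	 * setup, so tear down the old client(s) first.
+	 */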
+	i915_guc_submission_disable(dev);
+
 	if (guc_fw->uc_fw_fetch_status == INTEL_UC_FIRMWARE_NONE)
 		return 0;
 
@@ -412,12 +414,20 @@  int intel_guc_ucode_load(struct drm_device *dev, bool wait)
 
 	guc_fw->uc_fw_load_status = INTEL_UC_FIRMWARE_SUCCESS;
 
+	if (i915.enable_guc_submission) {
+		err = i915_guc_submission_enable(dev);
+		if (err)
+			goto fail;
+	}
+
 	return 0;
 
 fail:
 	if (guc_fw->uc_fw_load_status == INTEL_UC_FIRMWARE_PENDING)
 		guc_fw->uc_fw_load_status = INTEL_UC_FIRMWARE_FAIL;
 
+	i915_guc_submission_disable(dev);
+
 	DRM_ERROR("Failed to initialize GuC, error %d\n", err);
 
 	return err;