From patchwork Fri May 19 13:23:17 2017
X-Patchwork-Submitter: Michał Winiarski
X-Patchwork-Id: 9737221
From: Michał Winiarski
To: intel-gfx@lists.freedesktop.org
Date: Fri, 19 May 2017 15:23:17 +0200
Message-ID: <20170519132321.11953-2-michal.winiarski@intel.com>
In-Reply-To: <20170519132321.11953-1-michal.winiarski@intel.com>
References: <20170519132321.11953-1-michal.winiarski@intel.com>
Subject: [Intel-gfx] [PATCH v2 2/6] drm/i915/guc: Submit GuC workitems containing coalesced requests

To create an upper bound on the number of GuC workitems, we need to change
the way requests are submitted. Rather than submitting each request as an
individual workitem, we can coalesce requests in a similar way to how we
handle the execlist submission ports. We also need to stop pretending that
we're doing "lite-restore" in GuC submission (we would create a workitem
each time we hit this condition).
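To make the intended flow easier to follow, here is a tiny self-contained
model of the coalescing rule (toy types and names only, not the driver
code): consecutive requests that share a context are folded into a single
port, one workitem is submitted per port, and dequeueing stops once both
ports are in use.

/* Toy model of request coalescing -- all names below are made up. */
#include <stdio.h>

struct fake_request {
	int ctx;	/* context id */
	int seqno;
};

#define NPORTS 2	/* mirrors the two execlist submission ports */

int main(void)
{
	struct fake_request queue[] = {
		{ .ctx = 1, .seqno = 1 },
		{ .ctx = 1, .seqno = 2 },
		{ .ctx = 2, .seqno = 3 },
		{ .ctx = 2, .seqno = 4 },
		{ .ctx = 3, .seqno = 5 },	/* left queued: both ports busy */
	};
	const int nreq = sizeof(queue) / sizeof(queue[0]);
	int last_ctx = -1;	/* nothing coalesced yet */
	int used_ports = 0;
	int i;

	for (i = 0; i < nreq; i++) {
		if (last_ctx != -1 && queue[i].ctx != last_ctx) {
			/* Context switch: close the current port... */
			if (used_ports == NPORTS - 1)
				break;	/* ...unless we are out of ports */
			printf("submit one workitem for ctx %d (port %d)\n",
			       last_ctx, used_ports);
			used_ports++;
		}
		printf("  coalesce request %d (ctx %d) into port %d\n",
		       queue[i].seqno, queue[i].ctx, used_ports);
		last_ctx = queue[i].ctx;
	}
	if (last_ctx != -1)
		printf("submit one workitem for ctx %d (port %d)\n",
		       last_ctx, used_ports);

	return 0;
}

With the above input this produces one workitem covering requests 1-2 and
one covering requests 3-4, instead of one workitem per request; request 5
stays queued until a port frees up, which is what bounds the number of
workitems in flight.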
v2: Also coalesce when replaying on reset (Daniele)

Cc: Chris Wilson
Cc: Daniele Ceraolo Spurio
Cc: Michal Wajdeczko
Cc: Oscar Mateo
Signed-off-by: Michał Winiarski
---
 drivers/gpu/drm/i915/i915_guc_submission.c | 68 ++++++++++++++++--------------
 1 file changed, 36 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
index b3da056..a586998 100644
--- a/drivers/gpu/drm/i915/i915_guc_submission.c
+++ b/drivers/gpu/drm/i915/i915_guc_submission.c
@@ -580,7 +580,7 @@ static int guc_ring_doorbell(struct i915_guc_client *client)
 }
 
 /**
- * __i915_guc_submit() - Submit commands through GuC
+ * i915_guc_submit() - Submit commands through GuC
  * @rq: request associated with the commands
  *
  * The caller must have already called i915_guc_wq_reserve() above with
@@ -594,7 +594,7 @@ static int guc_ring_doorbell(struct i915_guc_client *client)
  * The only error here arises if the doorbell hardware isn't functioning
  * as expected, which really shouln't happen.
  */
-static void __i915_guc_submit(struct drm_i915_gem_request *rq)
+static void i915_guc_submit(struct drm_i915_gem_request *rq)
 {
 	struct drm_i915_private *dev_priv = rq->i915;
 	struct intel_engine_cs *engine = rq->engine;
@@ -624,12 +624,6 @@ static void __i915_guc_submit(struct drm_i915_gem_request *rq)
 	spin_unlock_irqrestore(&client->wq_lock, flags);
 }
 
-static void i915_guc_submit(struct drm_i915_gem_request *rq)
-{
-	__i915_gem_request_submit(rq);
-	__i915_guc_submit(rq);
-}
-
 static void nested_enable_signaling(struct drm_i915_gem_request *rq)
 {
 	/* If we use dma_fence_enable_sw_signaling() directly, lockdep
@@ -665,12 +659,15 @@ static void port_assign(struct execlist_port *port,
 	nested_enable_signaling(rq);
 }
 
-static bool i915_guc_dequeue(struct intel_engine_cs *engine)
+static void i915_guc_dequeue(struct intel_engine_cs *engine)
 {
 	struct execlist_port *port = engine->execlist_port;
-	struct drm_i915_gem_request *last = port_request(port);
-	struct rb_node *rb;
+	struct drm_i915_gem_request *last = NULL;
 	bool submit = false;
+	struct rb_node *rb;
+
+	if (port_request(port))
+		port++;
 
 	spin_lock_irq(&engine->timeline->lock);
 	rb = engine->execlist_first;
@@ -689,12 +686,16 @@ static bool i915_guc_dequeue(struct intel_engine_cs *engine)
 			port_assign(port, last);
 			port++;
+
+			if (submit) {
+				i915_guc_submit(last);
+				submit = false;
+			}
 		}
 
 		INIT_LIST_HEAD(&rq->priotree.link);
 		rq->priotree.priority = INT_MAX;
 
-		i915_guc_submit(rq);
+		__i915_gem_request_submit(rq);
 		trace_i915_gem_request_in(rq, port_index(port, engine));
 		last = rq;
 		submit = true;
@@ -708,11 +709,11 @@ static bool i915_guc_dequeue(struct intel_engine_cs *engine)
 	}
 done:
 	engine->execlist_first = rb;
-	if (submit)
+	if (submit) {
 		port_assign(port, last);
+		i915_guc_submit(last);
+	}
 	spin_unlock_irq(&engine->timeline->lock);
-
-	return submit;
 }
 
 static void i915_guc_irq_handler(unsigned long data)
@@ -720,24 +721,20 @@ static void i915_guc_irq_handler(unsigned long data)
 {
 	struct intel_engine_cs *engine = (struct intel_engine_cs *)data;
 	struct execlist_port *port = engine->execlist_port;
 	struct drm_i915_gem_request *rq;
-	bool submit;
 
-	do {
-		rq = port_request(&port[0]);
-		while (rq && i915_gem_request_completed(rq)) {
-			trace_i915_gem_request_out(rq);
-			i915_gem_request_put(rq);
+	rq = port_request(&port[0]);
+	while (rq && i915_gem_request_completed(rq)) {
+		trace_i915_gem_request_out(rq);
+		i915_gem_request_put(rq);
 
-			port[0] = port[1];
-			memset(&port[1], 0, sizeof(port[1]));
+		port[0] = port[1];
+		memset(&port[1], 0, sizeof(port[1]));
 
-			rq = port_request(&port[0]);
-		}
+		rq = port_request(&port[0]);
+	}
 
-		submit = false;
-		if (!port_count(&port[1]))
-			submit = i915_guc_dequeue(engine);
-	} while (submit);
+	if (!port_isset(&port[1]))
+		i915_guc_dequeue(engine);
 }
 
 /*
@@ -1255,6 +1252,8 @@ int i915_guc_submission_enable(struct drm_i915_private *dev_priv)
 	for_each_engine(engine, dev_priv, id) {
 		const int wqi_size = sizeof(struct guc_wq_item);
 		struct drm_i915_gem_request *rq;
+		struct execlist_port *port = engine->execlist_port;
+		int n;
 
 		/* The tasklet was initialised by execlists, and may be in
 		 * a state of flux (across a reset) and so we just want to
@@ -1266,11 +1265,16 @@ int i915_guc_submission_enable(struct drm_i915_private *dev_priv)
 
 		/* Replay the current set of previously submitted requests */
 		spin_lock_irq(&engine->timeline->lock);
-		list_for_each_entry(rq, &engine->timeline->requests, link) {
+		list_for_each_entry(rq, &engine->timeline->requests, link)
 			guc_client_update_wq_rsvd(client, wqi_size);
-			__i915_guc_submit(rq);
-		}
 		spin_unlock_irq(&engine->timeline->lock);
+
+		for (n = 0; n < ARRAY_SIZE(engine->execlist_port); n++) {
+			if (!port_isset(&port[n]))
+				break;
+
+			i915_guc_submit(port_request(&port[n]));
+		}
 	}
 
 	return 0;