From patchwork Fri Sep 29 12:42:47 2017
From: Mika Kuoppala
To: intel-gfx@lists.freedesktop.org
Date: Fri, 29 Sep 2017 15:42:47 +0300
Message-Id: <20170929124249.10202-1-mika.kuoppala@intel.com>
Subject: [Intel-gfx] [PATCH 1/3] drm/i915: Introduce execlist_port_* accessors

Instead of trusting that the first available port is at index 0, use an
accessor to hide this. This is preparation for following patches where
the head can be at an arbitrary location in the port array.
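To make the indexing scheme concrete, here is a rough standalone sketch
(illustrative names only, not the driver code): with a power-of-two number
of ports, (head + n) masked by (nports - 1) selects the array slot for
logical port n, so the first busy port no longer has to live at slot 0.

/*
 * Standalone sketch (not i915 code, illustrative names): map a logical
 * port offset onto a small ring of ports via a masked head index.
 */
#include <stdio.h>

#define NPORTS    2U              /* must be a power of two */
#define PORT_MASK (NPORTS - 1)

static unsigned int port_slot(unsigned int head, unsigned int n)
{
        return (head + n) & PORT_MASK;
}

int main(void)
{
        unsigned int head = 1;    /* head may sit anywhere in the array */
        unsigned int n;

        for (n = 0; n < NPORTS; n++)
                printf("logical port %u -> array slot %u\n",
                       n, port_slot(head, n));

        return 0;
}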
v2: improved commit message, elsp_ready readability (Chris)
v3: s/execlist_port_index/execlist_port (Chris)
v4: rebase to new naming

Cc: Michał Winiarski
Cc: Joonas Lahtinen
Cc: Chris Wilson
Signed-off-by: Mika Kuoppala
---
 drivers/gpu/drm/i915/i915_debugfs.c        | 16 ++++++----
 drivers/gpu/drm/i915/i915_gpu_error.c      |  6 ++--
 drivers/gpu/drm/i915/i915_guc_submission.c | 17 +++++------
 drivers/gpu/drm/i915/i915_irq.c            |  2 +-
 drivers/gpu/drm/i915/intel_engine_cs.c     |  2 +-
 drivers/gpu/drm/i915/intel_lrc.c           | 47 ++++++++++++++++-------------
 drivers/gpu/drm/i915/intel_ringbuffer.h    | 48 ++++++++++++++++++++++++++----
 7 files changed, 92 insertions(+), 46 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index b4a6ac60e7c6..073bd0ac8f2a 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -3348,16 +3348,20 @@ static int i915_engine_info(struct seq_file *m, void *unused)
 
 		rcu_read_lock();
 		for (idx = 0; idx < execlists_num_ports(execlists); idx++) {
-			unsigned int count;
+			const struct execlist_port *port;
+			unsigned int count, n;
 
-			rq = port_unpack(&execlists->port[idx], &count);
+			port = execlists_port(execlists, idx);
+			n = port_index(port, execlists);
+
+			rq = port_unpack(port, &count);
 			if (rq) {
-				seq_printf(m, "\t\tELSP[%d] count=%d, ",
-					   idx, count);
+				seq_printf(m, "\t\tELSP[%d:%d] count=%d, ",
+					   idx, n, count);
 				print_request(m, rq, "rq: ");
 			} else {
-				seq_printf(m, "\t\tELSP[%d] idle\n",
-					   idx);
+				seq_printf(m, "\t\tELSP[%d:%d] idle\n",
+					   idx, n);
 			}
 		}
 		rcu_read_unlock();
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index c14552ab270b..9f2145c6961d 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -1332,11 +1332,13 @@ static void engine_record_requests(struct intel_engine_cs *engine,
 static void error_record_engine_execlists(struct intel_engine_cs *engine,
 					   struct drm_i915_error_engine *ee)
 {
-	const struct intel_engine_execlists * const execlists = &engine->execlists;
+	struct intel_engine_execlists * const execlists = &engine->execlists;
 	unsigned int n;
 
 	for (n = 0; n < execlists_num_ports(execlists); n++) {
-		struct drm_i915_gem_request *rq = port_request(&execlists->port[n]);
+		struct drm_i915_gem_request *rq;
+
+		rq = port_request(execlists_port(execlists, n));
 
 		if (!rq)
 			break;
diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
index 04f1281d81a5..c6cd05a5347c 100644
--- a/drivers/gpu/drm/i915/i915_guc_submission.c
+++ b/drivers/gpu/drm/i915/i915_guc_submission.c
@@ -562,8 +562,7 @@ static void i915_guc_dequeue(struct intel_engine_cs *engine)
 	struct intel_engine_execlists * const execlists = &engine->execlists;
 	struct execlist_port *port = execlists->port;
 	struct drm_i915_gem_request *last = NULL;
-	const struct execlist_port * const last_port =
-		&execlists->port[execlists->port_mask];
+	const struct execlist_port * const last_port = execlists_port_tail(execlists);
 	bool submit = false;
 	struct rb_node *rb;
 
@@ -587,7 +586,8 @@ static void i915_guc_dequeue(struct intel_engine_cs *engine)
 
 				if (submit)
 					port_assign(port, last);
-				port++;
+
+				port = execlists_port_next(execlists, port);
 			}
 
 			INIT_LIST_HEAD(&rq->priotree.link);
@@ -618,19 +618,18 @@ static void i915_guc_irq_handler(unsigned long data)
 {
 	struct intel_engine_cs * const engine = (struct intel_engine_cs *)data;
 	struct intel_engine_execlists * const execlists = &engine->execlists;
-	struct execlist_port *port = execlists->port;
-	const struct execlist_port * const last_port =
-		&execlists->port[execlists->port_mask];
+	struct execlist_port *port = execlists_port_head(execlists);
+	const struct execlist_port * const last_port = execlists_port_tail(execlists);
 	struct drm_i915_gem_request *rq;
 
-	rq = port_request(&port[0]);
+	rq = port_request(port);
 	while (rq && i915_gem_request_completed(rq)) {
 		trace_i915_gem_request_out(rq);
 		i915_gem_request_put(rq);
 
-		execlists_port_complete(execlists, port);
+		port = execlists_port_complete(execlists, port);
 
-		rq = port_request(&port[0]);
+		rq = port_request(port);
 	}
 
 	if (!port_isset(last_port))
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index efd7827ff181..b9d1f379c5a0 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -1382,7 +1382,7 @@ gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir, int test_shift)
 	bool tasklet = false;
 
 	if (iir & (GT_CONTEXT_SWITCH_INTERRUPT << test_shift)) {
-		if (port_count(&execlists->port[0])) {
+		if (port_count(execlists_port_head(execlists))) {
 			__set_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted);
 			tasklet = true;
 		}
diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
index a28e2a864cf1..3f857786e2ed 100644
--- a/drivers/gpu/drm/i915/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/intel_engine_cs.c
@@ -1505,7 +1505,7 @@ bool intel_engine_is_idle(struct intel_engine_cs *engine)
 		return false;
 
 	/* Both ports drained, no more ELSP submission? */
-	if (port_request(&engine->execlists.port[0]))
+	if (port_request(execlists_port_head(&engine->execlists)))
 		return false;
 
 	/* ELSP is empty, but there are ready requests? */
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 61cac26a8b05..cb7fb3c651ce 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -394,24 +394,26 @@ static u64 execlists_update_context(struct drm_i915_gem_request *rq)
 
 static void execlists_submit_ports(struct intel_engine_cs *engine)
 {
-	struct execlist_port *port = engine->execlists.port;
+	struct intel_engine_execlists * const execlists = &engine->execlists;
 	u32 __iomem *elsp =
 		engine->i915->regs + i915_mmio_reg_offset(RING_ELSP(engine));
 	unsigned int n;
 
-	for (n = execlists_num_ports(&engine->execlists); n--; ) {
+	for (n = execlists_num_ports(execlists); n--; ) {
+		struct execlist_port *port;
 		struct drm_i915_gem_request *rq;
 		unsigned int count;
 		u64 desc;
 
-		rq = port_unpack(&port[n], &count);
+		port = execlists_port(execlists, n);
+		rq = port_unpack(port, &count);
 		if (rq) {
 			GEM_BUG_ON(count > !n);
 			if (!count++)
 				execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_IN);
-			port_set(&port[n], port_pack(rq, count));
+			port_set(port, port_pack(rq, count));
 			desc = execlists_update_context(rq);
-			GEM_DEBUG_EXEC(port[n].context_id = upper_32_bits(desc));
+			GEM_DEBUG_EXEC(port->context_id = upper_32_bits(desc));
 		} else {
 			GEM_BUG_ON(!n);
 			desc = 0;
@@ -455,9 +457,8 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 {
 	struct drm_i915_gem_request *last;
 	struct intel_engine_execlists * const execlists = &engine->execlists;
-	struct execlist_port *port = execlists->port;
-	const struct execlist_port * const last_port =
-		&execlists->port[execlists->port_mask];
+	struct execlist_port *port = execlists_port_head(execlists);
+	const struct execlist_port * const last_port = execlists_port_tail(execlists);
 	struct rb_node *rb;
 	bool submit = false;
 
@@ -541,7 +542,8 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 
 			if (submit)
 				port_assign(port, last);
-			port++;
+
+			port = execlists_port_next(execlists, port);
 			GEM_BUG_ON(port_isset(port));
 		}
 
@@ -572,7 +574,7 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 }
 
 static void
-execlist_cancel_port_requests(struct intel_engine_execlists *execlists)
+execlists_cancel_port_requests(struct intel_engine_execlists *execlists)
 {
 	struct execlist_port *port = execlists->port;
 	unsigned int num_ports = ARRAY_SIZE(execlists->port);
@@ -598,7 +600,7 @@ static void execlists_cancel_requests(struct intel_engine_cs *engine)
 	spin_lock_irqsave(&engine->timeline->lock, flags);
 
 	/* Cancel the requests on the HW and clear the ELSP tracker. */
-	execlist_cancel_port_requests(execlists);
+	execlists_cancel_port_requests(execlists);
 
 	/* Mark all executing requests as skipped. */
 	list_for_each_entry(rq, &engine->timeline->requests, link) {
@@ -645,11 +647,12 @@ static void execlists_cancel_requests(struct intel_engine_cs *engine)
 	spin_unlock_irqrestore(&engine->timeline->lock, flags);
 }
 
-static bool execlists_elsp_ready(const struct intel_engine_cs *engine)
+static bool execlists_elsp_ready(struct intel_engine_execlists * const execlists)
 {
-	const struct execlist_port *port = engine->execlists.port;
+	struct execlist_port * const port0 = execlists_port_head(execlists);
+	struct execlist_port * const port1 = execlists_port_next(execlists, port0);
 
-	return port_count(&port[0]) + port_count(&port[1]) < 2;
+	return port_count(port0) + port_count(port1) < 2;
 }
 
 /*
@@ -660,7 +663,7 @@ static void intel_lrc_irq_handler(unsigned long data)
 {
 	struct intel_engine_cs * const engine = (struct intel_engine_cs *)data;
 	struct intel_engine_execlists * const execlists = &engine->execlists;
-	struct execlist_port *port = execlists->port;
+	struct execlist_port *port = execlists_port_head(execlists);
 	struct drm_i915_private *dev_priv = engine->i915;
 
 	/* We can skip acquiring intel_runtime_pm_get() here as it was taken
@@ -758,7 +761,7 @@ static void intel_lrc_irq_handler(unsigned long data)
 				trace_i915_gem_request_out(rq);
 				i915_gem_request_put(rq);
 
-				execlists_port_complete(execlists, port);
+				port = execlists_port_complete(execlists, port);
 			} else {
 				port_set(port, port_pack(rq, count));
 			}
@@ -775,7 +778,7 @@ static void intel_lrc_irq_handler(unsigned long data)
 		}
 	}
 
-	if (execlists_elsp_ready(engine))
+	if (execlists_elsp_ready(execlists))
 		execlists_dequeue(engine);
 
 	intel_uncore_forcewake_put(dev_priv, execlists->fw_domains);
@@ -785,16 +788,18 @@ static void insert_request(struct intel_engine_cs *engine,
 			   struct i915_priotree *pt,
 			   int prio)
 {
+	struct intel_engine_execlists * const execlists = &engine->execlists;
 	struct i915_priolist *p = lookup_priolist(engine, pt, prio);
 
 	list_add_tail(&pt->link, &ptr_mask_bits(p, 1)->requests);
-	if (ptr_unmask_bits(p, 1) && execlists_elsp_ready(engine))
-		tasklet_hi_schedule(&engine->execlists.irq_tasklet);
+	if (ptr_unmask_bits(p, 1) && execlists_elsp_ready(execlists))
+		tasklet_hi_schedule(&execlists->irq_tasklet);
 }
 
 static void execlists_submit_request(struct drm_i915_gem_request *request)
 {
 	struct intel_engine_cs *engine = request->engine;
+	struct intel_engine_execlists * const execlists = &engine->execlists;
 	unsigned long flags;
 
 	/* Will be called from irq-context when using foreign fences. */
@@ -802,7 +807,7 @@ static void execlists_submit_request(struct drm_i915_gem_request *request)
 
 	insert_request(engine, &request->priotree, request->priotree.priority);
 
-	GEM_BUG_ON(!engine->execlists.first);
+	GEM_BUG_ON(!execlists->first);
 	GEM_BUG_ON(list_empty(&request->priotree.link));
 
 	spin_unlock_irqrestore(&engine->timeline->lock, flags);
@@ -1397,7 +1402,7 @@ static void reset_common_ring(struct intel_engine_cs *engine,
 	 * guessing the missed context-switch events by looking at what
 	 * requests were completed.
 	 */
-	execlist_cancel_port_requests(execlists);
+	execlists_cancel_port_requests(execlists);
 
 	/* Push back any incomplete requests for replay after the reset. */
 	list_for_each_entry_safe_reverse(rq, rn,
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 56d7ae9f298b..2e795b44a942 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -244,6 +244,11 @@ struct intel_engine_execlists {
 	unsigned int port_mask;
 
 	/**
+	 * @port_head: first used execlist port
+	 */
+	unsigned int port_head;
+
+	/**
 	 * @queue: queue of requests, in priority lists
 	 */
 	struct rb_root queue;
@@ -524,16 +529,47 @@ execlists_num_ports(const struct intel_engine_execlists * const execlists)
 	return execlists->port_mask + 1;
 }
 
-static inline void
+#define __port_n(start, n, mask) (((start) + (n)) & (mask))
+#define port_n(e, n) __port_n((e)->port_head, n, (e)->port_mask)
+
+/* Index starting from port_head */
+static inline struct execlist_port *
+execlists_port(struct intel_engine_execlists * const execlists,
+	       const unsigned int n)
+{
+	return &execlists->port[port_n(execlists, n)];
+}
+
+static inline struct execlist_port *
+execlists_port_head(struct intel_engine_execlists * const execlists)
+{
+	return execlists_port(execlists, 0);
+}
+
+static inline struct execlist_port *
+execlists_port_tail(struct intel_engine_execlists * const execlists)
+{
+	return execlists_port(execlists, -1);
+}
+
+static inline struct execlist_port *
+execlists_port_next(struct intel_engine_execlists * const execlists,
+		    const struct execlist_port * const port)
+{
+	const unsigned int n = port_index(port, execlists);
+
+	return execlists_port(execlists, n + 1);
+}
+
+static inline struct execlist_port *
 execlists_port_complete(struct intel_engine_execlists * const execlists,
 			struct execlist_port * const port)
 {
-	const unsigned int m = execlists->port_mask;
-
-	GEM_BUG_ON(port_index(port, execlists) != 0);
+	GEM_BUG_ON(port_index(port, execlists) != execlists->port_head);
 
-	memmove(port, port + 1, m * sizeof(struct execlist_port));
-	memset(port + m, 0, sizeof(struct execlist_port));
+	memset(port, 0, sizeof(struct execlist_port));
+	execlists->port_head = port_n(execlists, 1);
+	return execlists_port_head(execlists);
 }
 
 static inline unsigned int
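As a rough usage illustration of the accessors above, here is a
self-contained user-space model (assumed two-port ring, invented model_*
names and types, not the i915 code) showing how completing the head port
now clears that slot and advances port_head rather than memmove()ing the
remaining ports down to slot 0:

/*
 * Hypothetical model of the accessor scheme introduced above; all names
 * here are illustrative and stand in for the driver's real structures.
 */
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define NPORTS 2

struct port {
        int request;                       /* 0 means "port empty" */
};

struct execlists_model {
        struct port port[NPORTS];
        unsigned int port_mask;            /* NPORTS - 1, NPORTS a power of two */
        unsigned int port_head;            /* index of first used port */
};

static struct port *model_port(struct execlists_model *e, unsigned int n)
{
        return &e->port[(e->port_head + n) & e->port_mask];
}

static struct port *model_port_head(struct execlists_model *e)
{
        return model_port(e, 0);
}

static struct port *model_port_next(struct execlists_model *e, struct port *p)
{
        unsigned int slot = (unsigned int)(p - e->port);   /* absolute slot */

        return &e->port[(slot + 1) & e->port_mask];
}

/* Clear the head port and advance port_head; return the new head. */
static struct port *model_port_complete(struct execlists_model *e, struct port *p)
{
        assert(p == model_port_head(e));

        memset(p, 0, sizeof(*p));
        e->port_head = (e->port_head + 1) & e->port_mask;
        return model_port_head(e);
}

int main(void)
{
        struct execlists_model e = { .port_mask = NPORTS - 1 };
        struct port *p = model_port_head(&e);

        p->request = 1;                    /* two requests in flight */
        model_port_next(&e, p)->request = 2;

        p = model_port_complete(&e, p);    /* retire request 1 */
        printf("new head carries request %d, head slot %u\n",
               p->request, e.port_head);   /* -> request 2, slot 1 */
        return 0;
}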