From patchwork Wed Aug 12 14:43:44 2015
X-Patchwork-Submitter: Dave Gordon
X-Patchwork-Id: 7002211
From: Dave Gordon <david.s.gordon@intel.com>
To: intel-gfx@lists.freedesktop.org
Date: Wed, 12 Aug 2015 15:43:44 +0100
Message-Id: <1439390624-21724-10-git-send-email-david.s.gordon@intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1439390624-21724-1-git-send-email-david.s.gordon@intel.com>
References: <1439390624-21724-1-git-send-email-david.s.gordon@intel.com>
Organization: Intel Corporation (UK) Ltd. - Co. Reg. #1134945 - Pipers Way, Swindon SN3 1RJ
Subject: [Intel-gfx] [PATCH 9/9 v6] drm/i915: Debugfs interface for GuC submission statistics

This provides a means of reading status and counts relating to GuC
actions and submissions.

v2: Remove surplus blank line in output [Chris Wilson]
v5: Added GuC per-engine submission & seqno statistics
v6: Add per-ring statistics to client, refactor client-dumper.
Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
Signed-off-by: Alex Dai
---
 drivers/gpu/drm/i915/i915_debugfs.c | 76 +++++++++++++++++++++++++++++++++++++
 1 file changed, 76 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index cfddc9a..7a28de5 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -2412,6 +2412,81 @@ static int i915_guc_load_status_info(struct seq_file *m, void *data)
 	return 0;
 }
 
+static void i915_guc_client_info(struct seq_file *m,
+				 struct drm_i915_private *dev_priv,
+				 struct i915_guc_client *client)
+{
+	struct intel_engine_cs *ring;
+	uint64_t tot = 0;
+	uint32_t i;
+
+	seq_printf(m, "\tPriority %d, GuC ctx index: %u, PD offset 0x%x\n",
+		client->priority, client->ctx_index, client->proc_desc_offset);
+	seq_printf(m, "\tDoorbell id %d, offset: 0x%x, cookie 0x%x\n",
+		client->doorbell_id, client->doorbell_offset, client->cookie);
+	seq_printf(m, "\tWQ size %d, offset: 0x%x, tail %d\n",
+		client->wq_size, client->wq_offset, client->wq_tail);
+
+	seq_printf(m, "\tFailed to queue: %u\n", client->q_fail);
+	seq_printf(m, "\tFailed doorbell: %u\n", client->b_fail);
+	seq_printf(m, "\tLast submission result: %d\n", client->retcode);
+
+	for_each_ring(ring, dev_priv, i) {
+		seq_printf(m, "\tSubmissions: %llu %s\n",
+				client->submissions[i],
+				ring->name);
+		tot += client->submissions[i];
+	}
+	seq_printf(m, "\tTotal: %llu\n", tot);
+}
+
+static int i915_guc_info(struct seq_file *m, void *data)
+{
+	struct drm_info_node *node = m->private;
+	struct drm_device *dev = node->minor->dev;
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct intel_guc guc;
+	struct i915_guc_client client = { .client_obj = 0 };
+	struct intel_engine_cs *ring;
+	enum intel_ring_id i;
+	u64 total = 0;
+
+	if (!HAS_GUC_SCHED(dev_priv->dev))
+		return 0;
+
+	/* Take a local copy of the GuC data, so we can dump it at leisure */
+	spin_lock(&dev_priv->guc.host2guc_lock);
+	guc = dev_priv->guc;
+	if (guc.execbuf_client) {
+		spin_lock(&guc.execbuf_client->wq_lock);
+		client = *guc.execbuf_client;
+		spin_unlock(&guc.execbuf_client->wq_lock);
+	}
+	spin_unlock(&dev_priv->guc.host2guc_lock);
+
+	seq_printf(m, "GuC total action count: %llu\n", guc.action_count);
+	seq_printf(m, "GuC action failure count: %u\n", guc.action_fail);
+	seq_printf(m, "GuC last action command: 0x%x\n", guc.action_cmd);
+	seq_printf(m, "GuC last action status: 0x%x\n", guc.action_status);
+	seq_printf(m, "GuC last action error code: %d\n", guc.action_err);
+
+	seq_printf(m, "\nGuC submissions:\n");
+	for_each_ring(ring, dev_priv, i) {
+		seq_printf(m, "\t%-24s: %10llu, last seqno 0x%08x %9d\n",
+			ring->name, guc.submissions[i],
+			guc.last_seqno[i], guc.last_seqno[i]);
+		total += guc.submissions[i];
+	}
+	seq_printf(m, "\t%s: %llu\n", "Total", total);
+
+	seq_printf(m, "\nGuC execbuf client @ %p:\n", guc.execbuf_client);
+	i915_guc_client_info(m, dev_priv, &client);
+
+	/* Add more as required ... */
+
+	return 0;
+}
+
 static int i915_guc_log_dump(struct seq_file *m, void *data)
 {
 	struct drm_info_node *node = m->private;
@@ -5099,6 +5174,7 @@ static const struct drm_info_list i915_debugfs_list[] = {
 	{"i915_gem_hws_bsd", i915_hws_info, 0, (void *)VCS},
 	{"i915_gem_hws_vebox", i915_hws_info, 0, (void *)VECS},
 	{"i915_gem_batch_pool", i915_gem_batch_pool_info, 0},
+	{"i915_guc_info", i915_guc_info, 0},
 	{"i915_guc_load_status", i915_guc_load_status_info, 0},
 	{"i915_guc_log_dump", i915_guc_log_dump, 0},
 	{"i915_frequency_info", i915_frequency_info, 0},
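
For reference, a minimal userspace sketch (not part of the patch) of how the
new entry could be read once exposed by the driver. It assumes debugfs is
mounted at /sys/kernel/debug and that the i915 device is DRM minor 0; both
the mount point and the minor number may differ on a given system.

/*
 * Hypothetical example: dump the i915_guc_info debugfs entry.
 * Path assumes debugfs at /sys/kernel/debug and i915 on DRM minor 0.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	const char *path = "/sys/kernel/debug/dri/0/i915_guc_info";
	char buf[4096];
	size_t n;
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return EXIT_FAILURE;
	}

	/* Copy the seq_file output straight to stdout */
	while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
		fwrite(buf, 1, n, stdout);

	fclose(f);
	return EXIT_SUCCESS;
}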