From patchwork Wed Aug  5 05:52:55 2015
From: sourab.gupta@intel.com
To: intel-gfx@lists.freedesktop.org
Cc: Insoo Woo, Peter Zijlstra, Jabin Wu, Sourab Gupta
Date: Wed, 5 Aug 2015 11:22:55 +0530
Message-Id: <1438753977-20335-7-git-send-email-sourab.gupta@intel.com>
In-Reply-To: <1438753977-20335-1-git-send-email-sourab.gupta@intel.com>
References: <1438753977-20335-1-git-send-email-sourab.gupta@intel.com>
X-Patchwork-Submitter: sourab.gupta@intel.com
X-Patchwork-Id: 6946291
Subject: [Intel-gfx] [RFC 6/8] drm/i915: Insert commands for capture of OA counters in the ring

From: Sourab Gupta <sourab.gupta@intel.com>

This patch adds the routines which insert commands for capturing OA
snapshots into the ringbuffer of the RCS engine. The MI_REPORT_PERF_COUNT
command, which captures a snapshot of the OA counters, is inserted at
batch buffer boundaries.

While inserting the commands, we keep a reference to the associated
request; it is released when the samples are forwarded to userspace (or
when the event is destroyed). An active reference to the destination
buffer is also taken here, so that we can be assured the buffer is freed
only after the GPU is done with it, even if the local reference to the
buffer has been dropped.

v2: Changes (as suggested by Chris):
- Pass in the 'request' struct to the emit report function
- Removed multiple calls to i915_gem_obj_to_ggtt(); keep hold of the
  pinned vma from the start and use it when required
- Better nomenclature and error handling
Signed-off-by: Sourab Gupta <sourab.gupta@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h            | 13 +++++
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |  4 ++
 drivers/gpu/drm/i915/i915_oa_perf.c        | 87 ++++++++++++++++++++++++++++++
 3 files changed, 104 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index d355691..5c15e30 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1661,6 +1661,11 @@ enum i915_oa_event_state {
 	I915_OA_EVENT_STOPPED,
 };
 
+enum i915_profile_mode {
+	I915_PROFILE_OA = 0,
+	I915_PROFILE_MAX,
+};
+
 struct i915_oa_rcs_node {
 	struct list_head head;
 	struct drm_i915_gem_request *req;
@@ -1966,6 +1971,7 @@ struct drm_i915_private {
 	struct {
 		struct drm_i915_gem_object *obj;
 		u32 gtt_offset;
+		struct i915_vma *vma;
 		u8 *addr;
 		int format;
 		int format_size;
@@ -1976,6 +1982,9 @@ struct drm_i915_private {
 		struct work_struct forward_work;
 		struct work_struct event_destroy_work;
 	} oa_pmu;
+
+	void (*emit_profiling_data[I915_PROFILE_MAX])
+		(struct drm_i915_gem_request *req, u32 global_ctx_id);
 #endif
 
 	/* Abstract the submission mechanism (legacy ringbuffer or execlists) away */
@@ -3156,6 +3165,8 @@ void i915_oa_context_pin_notify(struct drm_i915_private *dev_priv,
 				struct intel_context *context);
 void i915_oa_context_unpin_notify(struct drm_i915_private *dev_priv,
 				struct intel_context *context);
+void i915_emit_profiling_data(struct drm_i915_gem_request *req,
+				u32 global_ctx_id);
 #else
 static inline void
 i915_oa_context_pin_notify(struct drm_i915_private *dev_priv,
@@ -3163,6 +3174,8 @@ i915_oa_context_pin_notify(struct drm_i915_private *dev_priv,
 static inline void
 i915_oa_context_unpin_notify(struct drm_i915_private *dev_priv,
 				struct intel_context *context) {}
+static inline void i915_emit_profiling_data(struct drm_i915_gem_request *req,
+				u32 global_ctx_id) {}
 #endif
 
 /* i915_gem_evict.c */
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 3336e1c..e58b10d 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1317,6 +1317,8 @@ i915_gem_ringbuffer_submission(struct drm_device *dev, struct drm_file *file,
 			goto error;
 	}
 
+	i915_emit_profiling_data(intel_ring_get_request(ring), ctx->global_id);
+
 	exec_len = args->batch_len;
 	if (cliprects) {
 		for (i = 0; i < args->num_cliprects; i++) {
@@ -1339,6 +1341,8 @@ i915_gem_ringbuffer_submission(struct drm_device *dev, struct drm_file *file,
 			return ret;
 	}
 
+	i915_emit_profiling_data(intel_ring_get_request(ring), ctx->global_id);
+
 	trace_i915_gem_ring_dispatch(intel_ring_get_request(ring), dispatch_flags);
 
 	i915_gem_execbuffer_move_to_active(vmas, ring);
diff --git a/drivers/gpu/drm/i915/i915_oa_perf.c b/drivers/gpu/drm/i915/i915_oa_perf.c
index 554a9fa..e3bc8e0 100644
--- a/drivers/gpu/drm/i915/i915_oa_perf.c
+++ b/drivers/gpu/drm/i915/i915_oa_perf.c
@@ -25,6 +25,86 @@ static int hsw_perf_format_sizes[] = {
 	64  /* C4_B8_HSW */
 };
 
+void i915_emit_profiling_data(struct drm_i915_gem_request *req,
+				u32 global_ctx_id)
+{
+	struct intel_engine_cs *ring = req->ring;
+	struct drm_i915_private *dev_priv = ring->dev->dev_private;
+	int i;
+
+	for (i = I915_PROFILE_OA; i < I915_PROFILE_MAX; i++) {
+		if (dev_priv->emit_profiling_data[i])
+			dev_priv->emit_profiling_data[i](req, global_ctx_id);
+	}
+}
+
+/*
+ * Emits the commands to capture OA perf report, into the Render CS
+ */
+static void i915_oa_emit_perf_report(struct drm_i915_gem_request *req,
+					u32 global_ctx_id)
+{
+	struct intel_engine_cs *ring = req->ring;
+	struct drm_i915_private *dev_priv = ring->dev->dev_private;
+	struct drm_i915_gem_object *obj = dev_priv->oa_pmu.oa_rcs_buffer.obj;
+	struct i915_oa_rcs_node *entry;
+	unsigned long lock_flags;
+	u32 addr = 0;
+	int ret;
+
+	/* OA counters are only supported on the render ring */
+	if (ring->id != RCS)
+		return;
+
+	entry = kzalloc(sizeof(*entry), GFP_KERNEL);
+	if (entry == NULL) {
+		DRM_ERROR("alloc failed\n");
+		return;
+	}
+
+	ret = intel_ring_begin(ring, 4);
+	if (ret) {
+		kfree(entry);
+		return;
+	}
+
+	entry->ctx_id = global_ctx_id;
+	i915_gem_request_assign(&entry->req, ring->outstanding_lazy_request);
+
+	spin_lock_irqsave(&dev_priv->oa_pmu.lock, lock_flags);
+	if (list_empty(&dev_priv->oa_pmu.node_list))
+		entry->offset = 0;
+	else {
+		struct i915_oa_rcs_node *last_entry;
+		int max_offset = dev_priv->oa_pmu.oa_rcs_buffer.node_count *
+				dev_priv->oa_pmu.oa_rcs_buffer.node_size;
+
+		last_entry = list_last_entry(&dev_priv->oa_pmu.node_list,
+					struct i915_oa_rcs_node, head);
+		entry->offset = last_entry->offset +
+				dev_priv->oa_pmu.oa_rcs_buffer.node_size;
+
+		if (entry->offset >= max_offset)
+			entry->offset = 0;
+	}
+	list_add_tail(&entry->head, &dev_priv->oa_pmu.node_list);
+	spin_unlock_irqrestore(&dev_priv->oa_pmu.lock, lock_flags);
+
+	addr = dev_priv->oa_pmu.oa_rcs_buffer.gtt_offset + entry->offset;
+
+	/* addr should be 64 byte aligned */
+	BUG_ON(addr & 0x3f);
+
+	intel_ring_emit(ring, MI_REPORT_PERF_COUNT | (1<<0));
+	intel_ring_emit(ring, addr | MI_REPORT_PERF_COUNT_GGTT);
+	intel_ring_emit(ring, ring->outstanding_lazy_request->seqno);
+	intel_ring_emit(ring, MI_NOOP);
+	intel_ring_advance(ring);
+
+	obj->base.write_domain = I915_GEM_DOMAIN_RENDER;
+	i915_vma_move_to_active(dev_priv->oa_pmu.oa_rcs_buffer.vma, ring);
+}
+
 static void forward_one_oa_snapshot_to_event(struct drm_i915_private *dev_priv,
 					     u8 *snapshot,
 					     struct perf_event *event)
@@ -324,6 +404,7 @@ oa_rcs_buffer_destroy(struct drm_i915_private *i915)
 	spin_lock(&i915->oa_pmu.lock);
 	i915->oa_pmu.oa_rcs_buffer.obj = NULL;
 	i915->oa_pmu.oa_rcs_buffer.gtt_offset = 0;
+	i915->oa_pmu.oa_rcs_buffer.vma = NULL;
 	i915->oa_pmu.oa_rcs_buffer.addr = NULL;
 	spin_unlock(&i915->oa_pmu.lock);
 }
@@ -584,6 +665,7 @@ static int init_oa_rcs_buffer(struct perf_event *event)
 	dev_priv->oa_pmu.oa_rcs_buffer.obj = bo;
 	dev_priv->oa_pmu.oa_rcs_buffer.gtt_offset =
 				i915_gem_obj_ggtt_offset(bo);
+	dev_priv->oa_pmu.oa_rcs_buffer.vma = i915_gem_obj_to_ggtt(bo);
 	dev_priv->oa_pmu.oa_rcs_buffer.addr = vmap_oa_buffer(bo);
 
 	INIT_LIST_HEAD(&dev_priv->oa_pmu.node_list);
@@ -1006,6 +1088,10 @@ static void i915_oa_event_start(struct perf_event *event, int flags)
 	dev_priv->oa_pmu.event_state = I915_OA_EVENT_STARTED;
 	update_oacontrol(dev_priv);
 
+	if (dev_priv->oa_pmu.multiple_ctx_mode)
+		dev_priv->emit_profiling_data[I915_PROFILE_OA] =
+					i915_oa_emit_perf_report;
+
 	/* Reset the head ptr to ensure we don't forward reports relating
 	 * to a previous perf event */
 	oastatus1 = I915_READ(GEN7_OASTATUS1);
@@ -1042,6 +1128,7 @@ static void i915_oa_event_stop(struct perf_event *event, int flags)
 
 	spin_lock_irqsave(&dev_priv->oa_pmu.lock, lock_flags);
 
+	dev_priv->emit_profiling_data[I915_PROFILE_OA] = NULL;
 	dev_priv->oa_pmu.event_state = I915_OA_EVENT_STOP_IN_PROGRESS;
 	list_for_each_entry(entry, &dev_priv->oa_pmu.node_list, head)
 		entry->discard = true;
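As a review aid, the slot placement done under the oa_pmu lock in
i915_oa_emit_perf_report() can be sketched as a stand-alone userspace model.
All names here (struct slot_state, next_slot_offset) are hypothetical, not
kernel code: each emitted MI_REPORT_PERF_COUNT writes one fixed-size node
into the destination buffer, the next node follows the previously queued one,
and the offset wraps to 0 once it would run past node_count * node_size:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of the circular snapshot-slot bookkeeping. */
struct slot_state {
	uint32_t node_size;   /* bytes per OA snapshot node */
	uint32_t node_count;  /* number of nodes the buffer holds */
	uint32_t last_offset; /* offset of the most recently queued node */
	int      have_nodes;  /* 0 while the pending-node list is empty */
};

static uint32_t next_slot_offset(struct slot_state *s)
{
	uint32_t max_offset = s->node_count * s->node_size;
	uint32_t offset;

	if (!s->have_nodes) {
		/* empty list: start at the beginning of the buffer */
		offset = 0;
	} else {
		/* place the node right after the previous one... */
		offset = s->last_offset + s->node_size;
		/* ...wrapping to 0 once the end of the buffer is reached */
		if (offset >= max_offset)
			offset = 0;
	}

	s->have_nodes = 1;
	s->last_offset = offset;
	return offset;
}
```

With node_size = 64 and node_count = 4 the successive offsets are
0, 64, 128, 192 and then 0 again; every offset is a multiple of 64,
which is the alignment the BUG_ON() in the patch checks for. Note the
wrap must be overwritten-oldest-first: the GPU write lands in the
oldest slot, which is safe only because the corresponding request
reference guarantees that slot has already been forwarded or discarded.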