From patchwork Mon Jul 1 11:34:35 2019
X-Patchwork-Submitter: Lionel Landwerlin
X-Patchwork-Id: 11025721
From: Lionel Landwerlin
To: intel-gfx@lists.freedesktop.org
Date: Mon, 1 Jul 2019 14:34:35 +0300
Message-Id: <20190701113437.4043-10-lionel.g.landwerlin@intel.com>
In-Reply-To: <20190701113437.4043-1-lionel.g.landwerlin@intel.com>
References: <20190701113437.4043-1-lionel.g.landwerlin@intel.com>
Subject: [Intel-gfx] [PATCH v6 09/11] drm/i915/perf: allow holding preemption
 on filtered ctx

We would like to make use of perf in Vulkan. The Vulkan API is much
lower level than OpenGL, with applications directly exposed to the
concept of command buffers (pretty much equivalent to our batch
buffers). In Vulkan, queries are always limited in scope to a command
buffer. In OpenGL, the lack of a command buffer concept meant that a
query's duration could span multiple command buffers.

With that restriction gone in Vulkan, we would like to simplify
measuring performance by just measuring the deltas between the counter
snapshots written by 2 MI_REPORT_PERF_COUNT commands, rather than the
more complex scheme we currently have in the GL driver, using 2
MI_REPORT_PERF_COUNT commands and doing some post processing on the
stream of OA reports coming from the global OA buffer to remove any
unrelated deltas in between the 2 MI_REPORT_PERF_COUNT.

Disabling preemption only applies to the single context we want to
query performance counters for and is considered a privileged
operation, by default protected by CAP_SYS_ADMIN. It is possible to
enable it for a normal user by disabling the paranoid stream setting.
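
For illustration only (not part of this patch): a rough userspace
sketch of how such a stream could be opened once this lands. The helper
name, context handle, metric set ID and OA exponent below are made-up
placeholders; the property and ioctl names come from the uapi touched
by this series. Opening still requires CAP_SYS_ADMIN unless
dev.i915.perf_stream_paranoid is set to 0.

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <drm/i915_drm.h>

  static int open_perf_stream_hold_preemption(int drm_fd, uint32_t ctx_handle,
                                              uint64_t metrics_set)
  {
          uint64_t properties[] = {
                  /* Filter the OA stream to a single GEM context. */
                  DRM_I915_PERF_PROP_CTX_HANDLE, ctx_handle,

                  /* Include OA reports in samples read from the stream fd. */
                  DRM_I915_PERF_PROP_SAMPLE_OA, 1,

                  /* Metric set, report format (a gen8+ format) and period. */
                  DRM_I915_PERF_PROP_OA_METRICS_SET, metrics_set,
                  DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8,
                  DRM_I915_PERF_PROP_OA_EXPONENT, 20,

                  /* New in perf revision 2: hold preemption on the filtered ctx. */
                  DRM_I915_PERF_PROP_HOLD_PREEMPTION, 1,
          };
          struct drm_i915_perf_open_param param = {
                  .flags = I915_PERF_FLAG_FD_CLOEXEC,
                  .num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
                  .properties_ptr = (uintptr_t)properties,
          };

          /* Returns a new perf stream fd on success, -1 with errno on error. */
          return ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
  }

The stream fd then keeps the OA unit configured while the application
collects MI_REPORT_PERF_COUNT snapshots from its own command buffers.
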
v2: Store preemption setting in intel_context (Chris)

v3: Use priorities to avoid preemption rather than the HW mechanism

v4: Just modify the port priority reporting function

Signed-off-by: Lionel Landwerlin
Reviewed-by: Chris Wilson
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c  |  8 +++++++
 drivers/gpu/drm/i915/gt/intel_lrc.c         |  7 +++++-
 drivers/gpu/drm/i915/i915_drv.c             |  2 +-
 drivers/gpu/drm/i915/i915_drv.h             |  8 +++++++
 drivers/gpu/drm/i915/i915_perf.c            | 22 +++++++++++++++++--
 drivers/gpu/drm/i915/i915_priolist_types.h  |  7 ++++++
 drivers/gpu/drm/i915/i915_request.c         |  4 ++--
 drivers/gpu/drm/i915/i915_request.h         | 14 +++++++++++-
 drivers/gpu/drm/i915/intel_guc_submission.c | 10 ++++++++-
 drivers/gpu/drm/i915/intel_pm.c             |  5 +++--
 include/uapi/drm/i915_drm.h                 | 11 ++++++++++
 11 files changed, 88 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index f92bace9caff..012d6d7f54e2 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -2104,6 +2104,14 @@ static int eb_oa_config(struct i915_execbuffer *eb)
 	if (err)
 		return err;
 
+	/*
+	 * If the perf stream was opened with hold preemption, flag the
+	 * request properly so that the priority of the request is bumped once
+	 * it reaches the execlist ports.
+	 */
+	if (eb->i915->perf.oa.exclusive_stream->hold_preemption)
+		eb->request->flags |= I915_REQUEST_FLAGS_PERF;
+
 	/*
 	 * If the config hasn't changed, skip reconfiguring the HW (this is
 	 * subject to a delay we want to avoid has much as possible).
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 001b696834f2..3ec7fe6d2790 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -256,7 +256,12 @@ static inline int rq_prio(const struct i915_request *rq)
 
 static int effective_prio(const struct i915_request *rq)
 {
-	int prio = rq_prio(rq);
+	int prio;
+
+	if (i915_request_has_perf(rq))
+		prio = I915_USER_PRIORITY(I915_PRIORITY_PERF);
+	else
+		prio = rq_prio(rq);
 
 	/*
 	 * On unwinding the active request, we give it a priority bump
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index a08c6123bf38..6395deadf73f 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -485,7 +485,7 @@ static int i915_getparam_ioctl(struct drm_device *dev, void *data,
 		value = INTEL_INFO(dev_priv)->has_coherent_ggtt;
 		break;
 	case I915_PARAM_PERF_REVISION:
-		value = 1;
+		value = 2;
 		break;
 	case I915_PARAM_HAS_EXEC_PERF_CONFIG:
 		/* Obviously requires perf support. */
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 664554114718..84f9dec59a19 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1232,6 +1232,14 @@ struct i915_perf_stream {
 	 */
 	bool enabled;
 
+	/**
+	 * @hold_preemption: Whether preemption is put on hold for command
+	 * submissions done on the @ctx. This is useful for some drivers that
+	 * cannot easily post process the OA buffer context to subtract delta
+	 * of performance counters not associated with @ctx.
+	 */
+	bool hold_preemption;
+
 	/**
 	 * @ops: The callbacks providing the implementation of this specific
 	 * type of configured stream.
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 6b659b5f1948..272036316192 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -344,6 +344,8 @@ static const struct i915_oa_format gen8_plus_oa_formats[I915_OA_FORMAT_MAX] = {
  * struct perf_open_properties - for validated properties given to open a stream
  * @sample_flags: `DRM_I915_PERF_PROP_SAMPLE_*` properties are tracked as flags
  * @single_context: Whether a single or all gpu contexts should be monitored
+ * @hold_preemption: Whether the preemption is disabled for the filtered
+ *                   context
  * @ctx_handle: A gem ctx handle for use with @single_context
  * @metrics_set: An ID for an OA unit metric set advertised via sysfs
  * @oa_format: An OA unit HW report format
@@ -358,6 +360,7 @@ struct perf_open_properties {
 	u32 sample_flags;
 
 	u64 single_context:1;
+	u64 hold_preemption:1;
 	u64 ctx_handle;
 
 	/* OA sampling state */
@@ -2399,6 +2402,8 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
 	stream->sample_flags |= SAMPLE_OA_REPORT;
 	stream->sample_size += format_size;
 
+	stream->hold_preemption = props->hold_preemption;
+
 	dev_priv->perf.oa.oa_buffer.format_size = format_size;
 	if (WARN_ON(dev_priv->perf.oa.oa_buffer.format_size == 0))
 		return -EINVAL;
@@ -2937,6 +2942,15 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
 		}
 	}
 
+	if (props->hold_preemption) {
+		if (!props->single_context) {
+			DRM_DEBUG("preemption disable with no context\n");
+			ret = -EINVAL;
+			goto err;
+		}
+		privileged_op = true;
+	}
+
 	/*
 	 * On Haswell the OA unit supports clock gating off for a specific
 	 * context and in this mode there's no visibility of metrics for the
@@ -2951,8 +2965,9 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
 	 * MI_REPORT_PERF_COUNT commands and so consider it a privileged op to
 	 * enable the OA unit by default.
 	 */
-	if (IS_HASWELL(dev_priv) && specific_ctx)
+	if (IS_HASWELL(dev_priv) && specific_ctx && !props->hold_preemption) {
 		privileged_op = false;
+	}
 
 	/* Similar to perf's kernel.perf_paranoid_cpu sysctl option
 	 * we check a dev.i915.perf_stream_paranoid sysctl option
@@ -2961,7 +2976,7 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
 	 */
 	if (privileged_op &&
 	    i915_perf_stream_paranoid && !capable(CAP_SYS_ADMIN)) {
-		DRM_DEBUG("Insufficient privileges to open system-wide i915 perf stream\n");
+		DRM_DEBUG("Insufficient privileges to open i915 perf stream\n");
 		ret = -EACCES;
 		goto err_ctx;
 	}
@@ -3153,6 +3168,9 @@ static int read_properties_unlocked(struct drm_i915_private *dev_priv,
 			props->oa_periodic = true;
 			props->oa_period_exponent = value;
 			break;
+		case DRM_I915_PERF_PROP_HOLD_PREEMPTION:
+			props->hold_preemption = !!value;
+			break;
 		case DRM_I915_PERF_PROP_MAX:
 			MISSING_CASE(id);
 			return -EINVAL;
diff --git a/drivers/gpu/drm/i915/i915_priolist_types.h b/drivers/gpu/drm/i915/i915_priolist_types.h
index 49709de69875..bb9d1e1fb94a 100644
--- a/drivers/gpu/drm/i915/i915_priolist_types.h
+++ b/drivers/gpu/drm/i915/i915_priolist_types.h
@@ -17,6 +17,13 @@ enum {
 	I915_PRIORITY_NORMAL = I915_CONTEXT_DEFAULT_PRIORITY,
 	I915_PRIORITY_MAX = I915_CONTEXT_MAX_USER_PRIORITY + 1,
 
+	/* Requests containing performance queries must not be preempted by
+	 * another context. They get scheduled with their default priority and
+	 * once they reach the execlist ports we bump them to
+	 * I915_PRIORITY_PERF so that they stick to the HW until they finish.
+	 */
+	I915_PRIORITY_PERF,
+
 	I915_PRIORITY_INVALID = INT_MIN
 };
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 5ff87c4a0cd5..222c9c56e9de 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -292,7 +292,7 @@ static bool i915_request_retire(struct i915_request *rq)
 		dma_fence_signal_locked(&rq->fence);
 	if (test_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT, &rq->fence.flags))
 		i915_request_cancel_breadcrumb(rq);
-	if (rq->waitboost) {
+	if (i915_request_has_waitboost(rq)) {
 		GEM_BUG_ON(!atomic_read(&rq->i915->gt_pm.rps.num_waiters));
 		atomic_dec(&rq->i915->gt_pm.rps.num_waiters);
 	}
@@ -684,7 +684,7 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
 	rq->file_priv = NULL;
 	rq->batch = NULL;
 	rq->capture_list = NULL;
-	rq->waitboost = false;
+	rq->flags = 0;
 	rq->execution_mask = ALL_ENGINES;
 
 	INIT_LIST_HEAD(&rq->active_list);
diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h
index b58ceef92e20..3bc2a3b8b9ca 100644
--- a/drivers/gpu/drm/i915/i915_request.h
+++ b/drivers/gpu/drm/i915/i915_request.h
@@ -216,7 +216,9 @@ struct i915_request {
 	/** Time at which this request was emitted, in jiffies. */
 	unsigned long emitted_jiffies;
 
-	bool waitboost;
+#define I915_REQUEST_FLAGS_WAITBOOST BIT(0)
+#define I915_REQUEST_FLAGS_PERF BIT(1)
+	u32 flags;
 
 	/** timeline->request entry for this request */
 	struct list_head link;
@@ -430,6 +432,16 @@ static inline void i915_request_mark_complete(struct i915_request *rq)
 	rq->hwsp_seqno = (u32 *)&rq->fence.seqno; /* decouple from HWSP */
 }
 
+static inline bool i915_request_has_waitboost(const struct i915_request *rq)
+{
+	return rq->flags & I915_REQUEST_FLAGS_WAITBOOST;
+}
+
+static inline bool i915_request_has_perf(const struct i915_request *rq)
+{
+	return rq->flags & I915_REQUEST_FLAGS_PERF;
+}
+
 bool i915_retire_requests(struct drm_i915_private *i915);
 
 #endif /* I915_REQUEST_H */
diff --git a/drivers/gpu/drm/i915/intel_guc_submission.c b/drivers/gpu/drm/i915/intel_guc_submission.c
index 12c22359fdac..48e7ae3d67a2 100644
--- a/drivers/gpu/drm/i915/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/intel_guc_submission.c
@@ -707,6 +707,14 @@ static inline int rq_prio(const struct i915_request *rq)
 	return rq->sched.attr.priority | __NO_PREEMPTION;
 }
 
+static inline int effective_prio(const struct i915_request *rq)
+{
+	if (i915_request_has_perf(rq))
+		return I915_USER_PRIORITY(I915_PRIORITY_PERF) | __NO_PREEMPTION;
+
+	return rq_prio(rq);
+}
+
 static struct i915_request *schedule_in(struct i915_request *rq, int idx)
 {
 	trace_i915_request_in(rq, idx);
@@ -747,7 +755,7 @@ static void __guc_dequeue(struct intel_engine_cs *engine)
 		&engine->i915->guc.preempt_work[engine->id];
 	int prio = execlists->queue_priority_hint;
 
-	if (i915_scheduler_need_preempt(prio, rq_prio(last))) {
+	if (i915_scheduler_need_preempt(prio, effective_prio(last))) {
 		intel_write_status_page(engine, I915_GEM_HWS_PREEMPT,
 					GUC_PREEMPT_INPROGRESS);
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index d9a7a13ce32a..491251419c67 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -6891,9 +6891,10 @@ void gen6_rps_boost(struct i915_request *rq)
 	/* Serializes with i915_request_retire() */
 	boost = false;
 	spin_lock_irqsave(&rq->lock, flags);
-	if (!rq->waitboost && !dma_fence_is_signaled_locked(&rq->fence)) {
+	if (!i915_request_has_waitboost(rq) &&
+	    !dma_fence_is_signaled_locked(&rq->fence)) {
 		boost = !atomic_fetch_inc(&rps->num_waiters);
-		rq->waitboost = true;
+		rq->flags |= I915_REQUEST_FLAGS_WAITBOOST;
 	}
 	spin_unlock_irqrestore(&rq->lock, flags);
 	if (!boost)
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index 96bc09af8cc4..f2b7a6c1e870 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -1983,6 +1983,17 @@ enum drm_i915_perf_property_id {
 	 */
 	DRM_I915_PERF_PROP_OA_EXPONENT,
 
+	/**
+	 * Specifying this property is only valid when specifying a context to
+	 * filter with DRM_I915_PERF_PROP_CTX_HANDLE. Specifying this property
+	 * will hold preemption of the particular context we want to gather
+	 * performance data about. The execbuf2 submissions must include a
+	 * drm_i915_gem_execbuffer_ext_perf parameter for this to apply.
+	 *
+	 * This property is available in perf revision 2.
+	 */
+	DRM_I915_PERF_PROP_HOLD_PREEMPTION,
+
 	DRM_I915_PERF_PROP_MAX /* non-ABI */
 };
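
Usage note (illustration, not part of the patch): since the series bumps
I915_PARAM_PERF_REVISION to 2 above, userspace can gate its use of
DRM_I915_PERF_PROP_HOLD_PREEMPTION on that revision. A minimal sketch,
with a made-up helper name:

  #include <sys/ioctl.h>
  #include <drm/i915_drm.h>

  /* Non-zero when the kernel's i915 perf interface revision includes
   * DRM_I915_PERF_PROP_HOLD_PREEMPTION (i.e. revision >= 2).
   */
  static int i915_perf_has_hold_preemption(int drm_fd)
  {
          int revision = 0;
          struct drm_i915_getparam gp = {
                  .param = I915_PARAM_PERF_REVISION,
                  .value = &revision,
          };

          if (ioctl(drm_fd, DRM_IOCTL_I915_GETPARAM, &gp))
                  return 0; /* older kernel without the parameter */

          return revision >= 2;
  }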