From patchwork Mon May 4 11:12:46 2020
X-Patchwork-Submitter: Lionel Landwerlin
X-Patchwork-Id: 11525503
From: Lionel Landwerlin
To: intel-gfx@lists.freedesktop.org
Cc: chris@chris-wilson.co.uk
Date: Mon, 4 May 2020 14:12:46 +0300
Message-Id: <20200504111249.1367096-2-lionel.g.landwerlin@intel.com>
In-Reply-To: <20200504111249.1367096-1-lionel.g.landwerlin@intel.com>
Subject: [Intel-gfx] [PATCH v12 1/4] drm/i915/perf: break OA config buffer object in 2

We want to enable performance monitoring on multiple contexts to cover
the Iris use case of using 2 GEM contexts (3D & compute). So start by
breaking the OA configuration BO which contains global & per context
register writes. NOA muxes & OA configurations are global, while FLEXEU
register configurations are per context.
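The key interface change in the diff below is that emit_oa_config() now takes a
bitmask of enum i915_oa_config_part values selecting which part(s) of the split
buffer to execute. A minimal standalone sketch of that selection pattern
(illustrative names only, not the driver code; the real enum and call sites are
in the diff that follows):

    #include <stdio.h>

    /* Mirrors enum i915_oa_config_part from the diff: each bit selects one
     * part of the (now split) OA configuration buffer to emit. */
    enum oa_config_part {
            OA_CONFIG_PART_PER_CONTEXT,     /* FLEX_EU register writes */
            OA_CONFIG_PART_GLOBAL,          /* NOA mux / OA counter writes */
            OA_CONFIG_PART_MAX,
    };

    static void emit_oa_config_parts(unsigned long mask)
    {
            for (int part = 0; part < OA_CONFIG_PART_MAX; part++) {
                    if (!(mask & (1UL << part)))
                            continue;
                    printf("emit %s part\n",
                           part == OA_CONFIG_PART_GLOBAL ? "global"
                                                         : "per-context");
            }
    }

    int main(void)
    {
            /* Existing callers request both parts, preserving the previous
             * behaviour; later patches in the series emit only the
             * per-context part when configuring additional contexts. */
            emit_oa_config_parts((1UL << OA_CONFIG_PART_GLOBAL) |
                                 (1UL << OA_CONFIG_PART_PER_CONTEXT));
            return 0;
    }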
v2: Use an offset into the same VMA (Chris) v3: Use a bitfield to select config parts to emit (Umesh) Signed-off-by: Lionel Landwerlin Reviewed-by: Chris Wilson --- drivers/gpu/drm/i915/i915_perf.c | 177 ++++++++++++++++++++----------- 1 file changed, 114 insertions(+), 63 deletions(-) diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c index c533f569dd42..c17696058c20 100644 --- a/drivers/gpu/drm/i915/i915_perf.c +++ b/drivers/gpu/drm/i915/i915_perf.c @@ -367,11 +367,18 @@ struct perf_open_properties { u64 poll_oa_period; }; +enum i915_oa_config_part { + I915_OA_CONFIG_PART_PER_CONTEXT, + I915_OA_CONFIG_PART_GLOBAL, + I915_OA_CONFIG_PART_MAX, +}; + struct i915_oa_config_bo { struct llist_node node; struct i915_oa_config *oa_config; struct i915_vma *vma; + u32 per_context_offset; }; static struct ctl_table_header *sysctl_header; @@ -1824,37 +1831,43 @@ static struct i915_oa_config_bo * alloc_oa_config_buffer(struct i915_perf_stream *stream, struct i915_oa_config *oa_config) { - struct drm_i915_gem_object *obj; struct i915_oa_config_bo *oa_bo; + struct drm_i915_gem_object *obj; size_t config_length = 0; - u32 *cs; + u32 *cs_start, *cs; int err; oa_bo = kzalloc(sizeof(*oa_bo), GFP_KERNEL); if (!oa_bo) return ERR_PTR(-ENOMEM); + /* + * Global configuration requires a jump into the NOA wait BO for it to + * apply. + */ config_length += num_lri_dwords(oa_config->mux_regs_len); config_length += num_lri_dwords(oa_config->b_counter_regs_len); - config_length += num_lri_dwords(oa_config->flex_regs_len); config_length += 3; /* MI_BATCH_BUFFER_START */ + + config_length += num_lri_dwords(oa_config->flex_regs_len); + config_length += 1 /* MI_BATCH_BUFFER_END */; + config_length = ALIGN(sizeof(u32) * config_length, I915_GTT_PAGE_SIZE); - obj = i915_gem_object_create_shmem(stream->perf->i915, config_length); + obj = i915_gem_object_create_shmem(stream->perf->i915, + config_length); if (IS_ERR(obj)) { err = PTR_ERR(obj); goto err_free; } - cs = i915_gem_object_pin_map(obj, I915_MAP_WB); - if (IS_ERR(cs)) { - err = PTR_ERR(cs); - goto err_oa_bo; + cs_start = i915_gem_object_pin_map(obj, I915_MAP_WB); + if (IS_ERR(cs_start)) { + err = PTR_ERR(cs_start); + goto err_bo; } - cs = write_cs_mi_lri(cs, - oa_config->mux_regs, - oa_config->mux_regs_len); + cs = cs_start; cs = write_cs_mi_lri(cs, oa_config->b_counter_regs, oa_config->b_counter_regs_len); @@ -1869,6 +1882,14 @@ alloc_oa_config_buffer(struct i915_perf_stream *stream, *cs++ = i915_ggtt_offset(stream->noa_wait); *cs++ = 0; + oa_bo->per_context_offset = 4 * (cs - cs_start); + + cs = write_cs_mi_lri(cs, + oa_config->mux_regs, + oa_config->mux_regs_len); + + *cs++ = MI_BATCH_BUFFER_END; + i915_gem_object_flush_map(obj); i915_gem_object_unpin_map(obj); @@ -1877,7 +1898,7 @@ alloc_oa_config_buffer(struct i915_perf_stream *stream, NULL); if (IS_ERR(oa_bo->vma)) { err = PTR_ERR(oa_bo->vma); - goto err_oa_bo; + goto err_bo; } oa_bo->oa_config = i915_oa_config_get(oa_config); @@ -1885,15 +1906,15 @@ alloc_oa_config_buffer(struct i915_perf_stream *stream, return oa_bo; -err_oa_bo: +err_bo: i915_gem_object_put(obj); err_free: kfree(oa_bo); return ERR_PTR(err); } -static struct i915_vma * -get_oa_vma(struct i915_perf_stream *stream, struct i915_oa_config *oa_config) +static struct i915_oa_config_bo * +get_oa_bo(struct i915_perf_stream *stream, struct i915_oa_config *oa_config) { struct i915_oa_config_bo *oa_bo; @@ -1906,34 +1927,31 @@ get_oa_vma(struct i915_perf_stream *stream, struct i915_oa_config *oa_config) 
memcmp(oa_bo->oa_config->uuid, oa_config->uuid, sizeof(oa_config->uuid)) == 0) - goto out; + return oa_bo; } - oa_bo = alloc_oa_config_buffer(stream, oa_config); - if (IS_ERR(oa_bo)) - return ERR_CAST(oa_bo); - -out: - return i915_vma_get(oa_bo->vma); + return alloc_oa_config_buffer(stream, oa_config); } static int emit_oa_config(struct i915_perf_stream *stream, struct i915_oa_config *oa_config, struct intel_context *ce, - struct i915_active *active) + struct i915_active *active, + unsigned long config_part_mask) { + struct i915_oa_config_bo *oa_bo; struct i915_request *rq; - struct i915_vma *vma; + enum i915_oa_config_part config_part; int err; - vma = get_oa_vma(stream, oa_config); - if (IS_ERR(vma)) - return PTR_ERR(vma); + oa_bo = get_oa_bo(stream, oa_config); + if (IS_ERR(oa_bo)) + return PTR_ERR(oa_bo); - err = i915_vma_pin(vma, 0, 0, PIN_GLOBAL | PIN_HIGH); + err = i915_vma_pin(oa_bo->vma, 0, 0, PIN_GLOBAL | PIN_HIGH); if (err) - goto err_vma_put; + return err; intel_engine_pm_get(ce->engine); rq = i915_request_create(ce); @@ -1955,26 +1973,41 @@ emit_oa_config(struct i915_perf_stream *stream, goto err_add_request; } - i915_vma_lock(vma); - err = i915_request_await_object(rq, vma->obj, 0); + i915_vma_lock(oa_bo->vma); + err = i915_request_await_object(rq, oa_bo->vma->obj, 0); if (!err) - err = i915_vma_move_to_active(vma, rq, 0); - i915_vma_unlock(vma); + err = i915_vma_move_to_active(oa_bo->vma, rq, 0); + i915_vma_unlock(oa_bo->vma); if (err) goto err_add_request; - err = rq->engine->emit_bb_start(rq, - vma->node.start, 0, - I915_DISPATCH_SECURE); - if (err) - goto err_add_request; + for_each_set_bit(config_part, &config_part_mask, + I915_OA_CONFIG_PART_MAX) { + u64 vma_offset; + + switch (config_part) { + case I915_OA_CONFIG_PART_PER_CONTEXT: + vma_offset = oa_bo->vma->node.start; + break; + case I915_OA_CONFIG_PART_GLOBAL: + vma_offset = oa_bo->vma->node.start + + oa_bo->per_context_offset; + break; + default: + MISSING_CASE(config_part); + goto err_add_request; + } + + err = rq->engine->emit_bb_start(rq, vma_offset, 0, + I915_DISPATCH_SECURE); + if (err) + goto err_add_request; + } err_add_request: i915_request_add(rq); err_vma_unpin: - i915_vma_unpin(vma); -err_vma_put: - i915_vma_put(vma); + i915_vma_unpin(oa_bo->vma); return err; } @@ -2004,9 +2037,11 @@ hsw_enable_metric_set(struct i915_perf_stream *stream, intel_uncore_rmw(uncore, GEN6_UCGCTL1, 0, GEN6_CSUNIT_CLOCK_GATE_DISABLE); - return emit_oa_config(stream, - stream->oa_config, oa_context(stream), - active); + return emit_oa_config(stream, stream->oa_config, + oa_context(stream), + active, + BIT(I915_OA_CONFIG_PART_GLOBAL) | + BIT(I915_OA_CONFIG_PART_PER_CONTEXT)); } static void hsw_disable_metric_set(struct i915_perf_stream *stream) @@ -2417,7 +2452,7 @@ gen8_enable_metric_set(struct i915_perf_stream *stream, { struct intel_uncore *uncore = stream->uncore; struct i915_oa_config *oa_config = stream->oa_config; - int ret; + int err; /* * We disable slice/unslice clock ratio change reports on SKL since @@ -2453,13 +2488,15 @@ gen8_enable_metric_set(struct i915_perf_stream *stream, * to make sure all slices/subslices are ON before writing to NOA * registers. 
*/ - ret = lrc_configure_all_contexts(stream, oa_config, active); - if (ret) - return ret; + err = lrc_configure_all_contexts(stream, oa_config, active); + if (err) + return err; - return emit_oa_config(stream, - stream->oa_config, oa_context(stream), - active); + return emit_oa_config(stream, oa_config, + oa_context(stream), + active, + BIT(I915_OA_CONFIG_PART_GLOBAL) | + BIT(I915_OA_CONFIG_PART_PER_CONTEXT)); } static u32 oag_report_ctx_switches(const struct i915_perf_stream *stream) @@ -2505,9 +2542,9 @@ gen12_enable_metric_set(struct i915_perf_stream *stream, return ret; /* - * For Gen12, performance counters are context - * saved/restored. Only enable it for the context that - * requested this. + * For Gen12, performance counters are also context saved/restored on + * another set of performance registers. Configure the unit dealing + * with those. */ if (stream->ctx) { ret = gen12_configure_oar_context(stream, active); @@ -2515,9 +2552,11 @@ gen12_enable_metric_set(struct i915_perf_stream *stream, return ret; } - return emit_oa_config(stream, - stream->oa_config, oa_context(stream), - active); + return emit_oa_config(stream, oa_config, + oa_context(stream), + active, + BIT(I915_OA_CONFIG_PART_GLOBAL) | + BIT(I915_OA_CONFIG_PART_PER_CONTEXT)); } static void gen8_disable_metric_set(struct i915_perf_stream *stream) @@ -3172,6 +3211,7 @@ static long i915_perf_config_locked(struct i915_perf_stream *stream, unsigned long metrics_set) { struct i915_oa_config *config; + struct i915_active *active = NULL; long ret = stream->oa_config->id; config = i915_perf_get_oa_config(stream->perf, metrics_set); @@ -3179,7 +3219,11 @@ static long i915_perf_config_locked(struct i915_perf_stream *stream, return -EINVAL; if (config != stream->oa_config) { - int err; + active = i915_active_create(); + if (!active) { + ret = -ENOMEM; + goto err_config; + } /* * If OA is bound to a specific context, emit the @@ -3190,13 +3234,20 @@ static long i915_perf_config_locked(struct i915_perf_stream *stream, * When set globally, we use a low priority kernel context, * so it will effectively take effect when idle. 
*/ - err = emit_oa_config(stream, config, oa_context(stream), NULL); - if (!err) - config = xchg(&stream->oa_config, config); - else - ret = err; + ret = emit_oa_config(stream, config, + oa_context(stream), + active, + BIT(I915_OA_CONFIG_PART_GLOBAL) | + BIT(I915_OA_CONFIG_PART_PER_CONTEXT)); + if (ret) + goto err_active; + + config = xchg(&stream->oa_config, config); } +err_active: + i915_active_put(active); +err_config: i915_oa_config_put(config); return ret;

From patchwork Mon May 4 11:12:47 2020
X-Patchwork-Submitter: Lionel Landwerlin
X-Patchwork-Id: 11525505
From: Lionel Landwerlin
To: intel-gfx@lists.freedesktop.org
Cc: chris@chris-wilson.co.uk
Date: Mon, 4 May 2020 14:12:47 +0300
Message-Id: <20200504111249.1367096-3-lionel.g.landwerlin@intel.com>
In-Reply-To: <20200504111249.1367096-1-lionel.g.landwerlin@intel.com>
Subject: [Intel-gfx] [PATCH v12 2/4] drm/i915/perf: stop using the kernel context

Chris doesn't like that.
v2: Don't forget to configure the kernel so that periodic reports are written appropriately (Lionel) v3: Keep the configuration context pinned for the lifecycle of i915_perf_stream (Chris) v4: drop intel_context_types.h include (Chris) drop empty line Signed-off-by: Lionel Landwerlin Reviewed-by: Chris Wilson --- drivers/gpu/drm/i915/i915_perf.c | 152 +++++++++++++++++-------- drivers/gpu/drm/i915/i915_perf_types.h | 9 +- 2 files changed, 111 insertions(+), 50 deletions(-) diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c index c17696058c20..67257bf9768c 100644 --- a/drivers/gpu/drm/i915/i915_perf.c +++ b/drivers/gpu/drm/i915/i915_perf.c @@ -1354,9 +1354,31 @@ free_noa_wait(struct i915_perf_stream *stream) i915_vma_unpin_and_release(&stream->noa_wait, 0); } +static int i915_perf_stream_sync(struct i915_perf_stream *stream, + bool enable) +{ + struct i915_active *active; + int err = 0; + + active = i915_active_create(); + if (!active) + return -ENOMEM; + + if (enable) + err = stream->perf->ops.enable_metric_set(stream, active); + else + stream->perf->ops.disable_metric_set(stream, active); + if (err == 0) + __i915_active_wait(active, TASK_UNINTERRUPTIBLE); + + i915_active_put(active); + return err; +} + static void i915_oa_stream_destroy(struct i915_perf_stream *stream) { struct i915_perf *perf = stream->perf; + int err; BUG_ON(stream != perf->exclusive_stream); @@ -1367,7 +1389,14 @@ static void i915_oa_stream_destroy(struct i915_perf_stream *stream) * See i915_oa_init_reg_state() and lrc_configure_all_contexts() */ WRITE_ONCE(perf->exclusive_stream, NULL); - perf->ops.disable_metric_set(stream); + err = i915_perf_stream_sync(stream, false /* enable */); + if (err) { + drm_err(&perf->i915->drm, + "Error while disabling OA stream\n"); + } + + intel_context_unpin(stream->config_context); + intel_context_put(stream->config_context); free_oa_buffer(stream); @@ -2011,11 +2040,6 @@ emit_oa_config(struct i915_perf_stream *stream, return err; } -static struct intel_context *oa_context(struct i915_perf_stream *stream) -{ - return stream->pinned_ctx ?: stream->engine->kernel_context; -} - static int hsw_enable_metric_set(struct i915_perf_stream *stream, struct i915_active *active) @@ -2038,13 +2062,14 @@ hsw_enable_metric_set(struct i915_perf_stream *stream, 0, GEN6_CSUNIT_CLOCK_GATE_DISABLE); return emit_oa_config(stream, stream->oa_config, - oa_context(stream), + stream->config_context, active, BIT(I915_OA_CONFIG_PART_GLOBAL) | BIT(I915_OA_CONFIG_PART_PER_CONTEXT)); } -static void hsw_disable_metric_set(struct i915_perf_stream *stream) +static void hsw_disable_metric_set(struct i915_perf_stream *stream, + struct i915_active *active) { struct intel_uncore *uncore = stream->uncore; @@ -2169,13 +2194,14 @@ gen8_load_flex(struct i915_request *rq, return 0; } -static int gen8_modify_context(struct intel_context *ce, +static int gen8_modify_context(struct i915_perf_stream *stream, + struct intel_context *ce, const struct flex *flex, unsigned int count) { struct i915_request *rq; int err; - rq = intel_engine_create_kernel_request(ce->engine); + rq = intel_context_create_request(stream->config_context); if (IS_ERR(rq)) return PTR_ERR(rq); @@ -2217,7 +2243,8 @@ gen8_modify_self(struct intel_context *ce, return err; } -static int gen8_configure_context(struct i915_gem_context *ctx, +static int gen8_configure_context(struct i915_perf_stream *stream, + struct i915_gem_context *ctx, struct flex *flex, unsigned int count) { struct i915_gem_engines_iter it; @@ -2235,7 +2262,7 @@ 
static int gen8_configure_context(struct i915_gem_context *ctx, continue; flex->value = intel_sseu_make_rpcs(ctx->i915, &ce->sseu); - err = gen8_modify_context(ce, flex, count); + err = gen8_modify_context(stream, ce, flex, count); intel_context_unpin(ce); if (err) @@ -2285,7 +2312,7 @@ static int gen12_configure_oar_context(struct i915_perf_stream *stream, if (err) return err; - err = gen8_modify_context(ce, regs_context, ARRAY_SIZE(regs_context)); + err = gen8_modify_context(stream, ce, regs_context, ARRAY_SIZE(regs_context)); intel_context_unlock_pinned(ce); if (err) return err; @@ -2328,6 +2355,7 @@ oa_configure_all_contexts(struct i915_perf_stream *stream, struct drm_i915_private *i915 = stream->perf->i915; struct intel_engine_cs *engine; struct i915_gem_context *ctx, *cn; + struct intel_context *kernel_context; int err; lockdep_assert_held(&stream->perf->lock); @@ -2355,7 +2383,7 @@ oa_configure_all_contexts(struct i915_perf_stream *stream, spin_unlock(&i915->gem.contexts.lock); - err = gen8_configure_context(ctx, regs, num_regs); + err = gen8_configure_context(stream, ctx, regs, num_regs); if (err) { i915_gem_context_put(ctx); return err; @@ -2368,12 +2396,23 @@ oa_configure_all_contexts(struct i915_perf_stream *stream, spin_unlock(&i915->gem.contexts.lock); /* - * After updating all other contexts, we need to modify ourselves. - * If we don't modify the kernel_context, we do not get events while - * idle. + * Modify the kernel context has this is where we're parked, we want + * the periodic ticking on idle to be consistent with what the perf + * stream was configured with. + */ + kernel_context = stream->engine->kernel_context; + regs[0].value = intel_sseu_make_rpcs(i915, &kernel_context->sseu); + err = gen8_modify_context(stream, kernel_context, regs, num_regs); + if (err) + return err; + + /* + * After updating all other contexts, we need to modify ourselves. If + * we don't modify the stream->perf_context, we do not get events + * while idle. */ for_each_uabi_engine(engine, i915) { - struct intel_context *ce = engine->kernel_context; + struct intel_context *ce = stream->config_context; if (engine->class != RENDER_CLASS) continue; @@ -2492,8 +2531,8 @@ gen8_enable_metric_set(struct i915_perf_stream *stream, if (err) return err; - return emit_oa_config(stream, oa_config, - oa_context(stream), + return emit_oa_config(stream, stream->oa_config, + stream->config_context, active, BIT(I915_OA_CONFIG_PART_GLOBAL) | BIT(I915_OA_CONFIG_PART_PER_CONTEXT)); @@ -2552,44 +2591,47 @@ gen12_enable_metric_set(struct i915_perf_stream *stream, return ret; } - return emit_oa_config(stream, oa_config, - oa_context(stream), + return emit_oa_config(stream, stream->oa_config, + stream->config_context, active, BIT(I915_OA_CONFIG_PART_GLOBAL) | BIT(I915_OA_CONFIG_PART_PER_CONTEXT)); } -static void gen8_disable_metric_set(struct i915_perf_stream *stream) +static void gen8_disable_metric_set(struct i915_perf_stream *stream, + struct i915_active *active) { struct intel_uncore *uncore = stream->uncore; /* Reset all contexts' slices/subslices configurations. */ - lrc_configure_all_contexts(stream, NULL, NULL); + lrc_configure_all_contexts(stream, NULL, active); intel_uncore_rmw(uncore, GDT_CHICKEN_BITS, GT_NOA_ENABLE, 0); } -static void gen10_disable_metric_set(struct i915_perf_stream *stream) +static void gen10_disable_metric_set(struct i915_perf_stream *stream, + struct i915_active *active) { struct intel_uncore *uncore = stream->uncore; /* Reset all contexts' slices/subslices configurations. 
*/ - lrc_configure_all_contexts(stream, NULL, NULL); + lrc_configure_all_contexts(stream, NULL, active); /* Make sure we disable noa to save power. */ intel_uncore_rmw(uncore, RPM_CONFIG1, GEN10_GT_NOA_ENABLE, 0); } -static void gen12_disable_metric_set(struct i915_perf_stream *stream) +static void gen12_disable_metric_set(struct i915_perf_stream *stream, + struct i915_active *active) { struct intel_uncore *uncore = stream->uncore; /* Reset all contexts' slices/subslices configurations. */ - gen12_configure_all_contexts(stream, NULL, NULL); + gen12_configure_all_contexts(stream, NULL, active); /* disable the context save/restore or OAR counters */ if (stream->ctx) - gen12_configure_oar_context(stream, NULL); + gen12_configure_oar_context(stream, active); /* Make sure we disable noa to save power. */ intel_uncore_rmw(uncore, RPM_CONFIG1, GEN10_GT_NOA_ENABLE, 0); @@ -2761,23 +2803,6 @@ static const struct i915_perf_stream_ops i915_oa_stream_ops = { .read = i915_oa_read, }; -static int i915_perf_stream_enable_sync(struct i915_perf_stream *stream) -{ - struct i915_active *active; - int err; - - active = i915_active_create(); - if (!active) - return -ENOMEM; - - err = stream->perf->ops.enable_metric_set(stream, active); - if (err == 0) - __i915_active_wait(active, TASK_UNINTERRUPTIBLE); - - i915_active_put(active); - return err; -} - static void get_default_sseu_config(struct intel_sseu *out_sseu, struct intel_engine_cs *engine) @@ -2835,6 +2860,7 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream, { struct drm_i915_private *i915 = stream->perf->i915; struct i915_perf *perf = stream->perf; + struct intel_timeline *timeline; int format_size; int ret; @@ -2944,10 +2970,30 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream, stream->ops = &i915_oa_stream_ops; + timeline = intel_timeline_create(stream->engine->gt, NULL); + if (IS_ERR(timeline)) { + ret = PTR_ERR(timeline); + goto err_timeline; + } + + stream->config_context = intel_context_create(stream->engine); + if (IS_ERR(stream->config_context)) { + intel_timeline_put(timeline); + ret = PTR_ERR(stream->config_context); + goto err_timeline; + } + + stream->config_context->sseu = props->sseu; + stream->config_context->timeline = timeline; + + ret = intel_context_pin(stream->config_context); + if (ret) + goto err_context_pin; + perf->sseu = props->sseu; WRITE_ONCE(perf->exclusive_stream, stream); - ret = i915_perf_stream_enable_sync(stream); + ret = i915_perf_stream_sync(stream, true /* enable */); if (ret) { DRM_DEBUG("Unable to enable metric set\n"); goto err_enable; @@ -2966,8 +3012,14 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream, err_enable: WRITE_ONCE(perf->exclusive_stream, NULL); - perf->ops.disable_metric_set(stream); + i915_perf_stream_sync(stream, false /* enable */); + intel_context_unpin(stream->config_context); + +err_context_pin: + intel_context_put(stream->config_context); + +err_timeline: free_oa_buffer(stream); err_oa_buf_alloc: @@ -3219,6 +3271,8 @@ static long i915_perf_config_locked(struct i915_perf_stream *stream, return -EINVAL; if (config != stream->oa_config) { + struct intel_context *ce = stream->pinned_ctx ?: stream->config_context; + active = i915_active_create(); if (!active) { ret = -ENOMEM; @@ -3235,7 +3289,7 @@ static long i915_perf_config_locked(struct i915_perf_stream *stream, * so it will effectively take effect when idle. 
*/ ret = emit_oa_config(stream, config, - oa_context(stream), + ce, active, BIT(I915_OA_CONFIG_PART_GLOBAL) | BIT(I915_OA_CONFIG_PART_PER_CONTEXT)); diff --git a/drivers/gpu/drm/i915/i915_perf_types.h b/drivers/gpu/drm/i915/i915_perf_types.h index a36a455ae336..c0a78166b1d9 100644 --- a/drivers/gpu/drm/i915/i915_perf_types.h +++ b/drivers/gpu/drm/i915/i915_perf_types.h @@ -311,6 +311,12 @@ struct i915_perf_stream { * buffer should be checked for available data. */ u64 poll_oa_period; + + /** + * @config_context: A logical context for use by the perf stream for + * configuring the HW. + */ + struct intel_context *config_context; }; /** @@ -348,7 +354,8 @@ struct i915_oa_ops { * @disable_metric_set: Remove system constraints associated with using * the OA unit. */ - void (*disable_metric_set)(struct i915_perf_stream *stream); + void (*disable_metric_set)(struct i915_perf_stream *stream, + struct i915_active *active); /** * @oa_enable: Enable periodic sampling From patchwork Mon May 4 11:12:48 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lionel Landwerlin X-Patchwork-Id: 11525507 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 24FBE92A for ; Mon, 4 May 2020 11:13:02 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 0D53A2073B for ; Mon, 4 May 2020 11:13:02 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 0D53A2073B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 96B1589FCC; Mon, 4 May 2020 11:13:01 +0000 (UTC) X-Original-To: intel-gfx@lists.freedesktop.org Delivered-To: intel-gfx@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id BB67A89FCC for ; Mon, 4 May 2020 11:12:59 +0000 (UTC) IronPort-SDR: xAQspaeRyjxrWom88s0hiFm6Wqjhb8IrcgZBz27Vq3sl5SkfrOKprCen81EHu8qP37ZM8u7A6v E76q5/M4wZxw== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 May 2020 04:12:59 -0700 IronPort-SDR: ejUbMajpVcQzC+pMWbj9sSNvS0N7kgP/oPdy+44RdMFDBNrlUzuXdnc1ZcehKtnMD2Focg+7QK g3IPL9SGlGUw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.73,351,1583222400"; d="scan'208";a="248188636" Received: from efilatov-mobl.ger.corp.intel.com (HELO delly.ger.corp.intel.com) ([10.252.56.163]) by orsmga007.jf.intel.com with ESMTP; 04 May 2020 04:12:57 -0700 From: Lionel Landwerlin To: intel-gfx@lists.freedesktop.org Date: Mon, 4 May 2020 14:12:48 +0300 Message-Id: <20200504111249.1367096-4-lionel.g.landwerlin@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200504111249.1367096-1-lionel.g.landwerlin@intel.com> References: <20200504111249.1367096-1-lionel.g.landwerlin@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v12 3/4] drm/i915/perf: prepare driver to receive multiple ctx handles X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 
2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: chris@chris-wilson.co.uk Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Make all the internal necessary changes before we flip the switch. v2: Use an unlimited number of intel contexts (Chris) v3: Handle GEM context with multiple RCS0 logical contexts (Chris) v4: Let the intel_context create its own timeline (Chris) Only pin configuration context when needed (Chris) v5: Pass filtering context ID by argument (Chris) Signed-off-by: Lionel Landwerlin Reviewed-by: Chris Wilson Reported-by: kbuild test robot Reported-by: Dan Carpenter --- drivers/gpu/drm/i915/i915_perf.c | 565 +++++++++++++++---------- drivers/gpu/drm/i915/i915_perf_types.h | 37 +- 2 files changed, 361 insertions(+), 241 deletions(-) diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c index 67257bf9768c..66d52ee4767b 100644 --- a/drivers/gpu/drm/i915/i915_perf.c +++ b/drivers/gpu/drm/i915/i915_perf.c @@ -192,6 +192,7 @@ */ #include +#include #include #include @@ -329,7 +330,8 @@ static const struct i915_oa_format gen12_oa_formats[I915_OA_FORMAT_MAX] = { * @single_context: Whether a single or all gpu contexts should be monitored * @hold_preemption: Whether the preemption is disabled for the filtered * context - * @ctx_handle: A gem ctx handle for use with @single_context + * @n_ctx_handles: Length of @ctx_handles + * @ctx_handles: An array of gem context handles * @metrics_set: An ID for an OA unit metric set advertised via sysfs * @oa_format: An OA unit HW report format * @oa_periodic: Whether to enable periodic OA unit sampling @@ -349,9 +351,10 @@ static const struct i915_oa_format gen12_oa_formats[I915_OA_FORMAT_MAX] = { struct perf_open_properties { u32 sample_flags; - u64 single_context:1; u64 hold_preemption:1; - u64 ctx_handle; + + u32 n_ctx_handles; + u32 *ctx_handles; /* OA sampling state */ int metrics_set; @@ -631,6 +634,23 @@ static int append_oa_sample(struct i915_perf_stream *stream, return 0; } +static int ctx_id_equal(const void *key, const void *elem) +{ + const struct i915_perf_context_detail *details = elem; + + return ((int)details->id) - (uintptr_t)key; +} + +static inline bool ctx_id_match(struct i915_perf_stream *stream, + u32 masked_ctx_id) +{ + return bsearch((void *)(uintptr_t)masked_ctx_id, + stream->pinned_ctxs, + stream->n_pinned_ctxs, + sizeof(*stream->pinned_ctxs), + ctx_id_equal) != NULL; +} + /** * Copies all buffered OA reports into userspace read() buffer. * @stream: An i915-perf stream opened for OA metrics @@ -742,7 +762,7 @@ static int gen8_append_oa_reports(struct i915_perf_stream *stream, continue; } - ctx_id = report32[2] & stream->specific_ctx_id_mask; + ctx_id = report32[2] & stream->ctx_id_mask; /* * Squash whatever is in the CTX_ID field if it's marked as @@ -787,26 +807,32 @@ static int gen8_append_oa_reports(struct i915_perf_stream *stream, * switches since it's not-uncommon for periodic samples to * identify a switch before any 'context switch' report. */ - if (!stream->perf->exclusive_stream->ctx || - stream->specific_ctx_id == ctx_id || - stream->oa_buffer.last_ctx_id == stream->specific_ctx_id || - reason & OAREPORT_REASON_CTX_SWITCH) { - - /* - * While filtering for a single context we avoid - * leaking the IDs of other contexts. 
- */ - if (stream->perf->exclusive_stream->ctx && - stream->specific_ctx_id != ctx_id) { - report32[2] = INVALID_CTX_ID; - } - + if (!stream->perf->exclusive_stream->n_ctxs) { ret = append_oa_sample(stream, buf, count, offset, report); if (ret) break; + } else { + bool ctx_match = ctx_id != INVALID_CTX_ID && + ctx_id_match(stream, ctx_id); + + if (ctx_match || + stream->oa_buffer.last_ctx_match || + reason & OAREPORT_REASON_CTX_SWITCH) { + /* + * While filtering for a single context we avoid + * leaking the IDs of other contexts. + */ + if (!ctx_match) + report32[2] = INVALID_CTX_ID; + + ret = append_oa_sample(stream, buf, count, offset, + report); + if (ret) + break; + } - stream->oa_buffer.last_ctx_id = ctx_id; + stream->oa_buffer.last_ctx_match = ctx_match; } /* @@ -1197,136 +1223,174 @@ static int i915_oa_read(struct i915_perf_stream *stream, return stream->perf->ops.read(stream, buf, count, offset); } -static struct intel_context *oa_pin_context(struct i915_perf_stream *stream) +static u32 get_ctx_id_mask(struct intel_engine_cs *engine) { - struct i915_gem_engines_iter it; - struct i915_gem_context *ctx = stream->ctx; - struct intel_context *ce; - int err; + switch (INTEL_GEN(engine->i915)) { + case 7: + /* + * On Haswell we don't do any post processing of the reports + * and don't need to use the mask. + */ + return 0; - for_each_gem_engine(ce, i915_gem_context_lock_engines(ctx), it) { - if (ce->engine != stream->engine) /* first match! */ - continue; + case 8: + case 9: + case 10: + if (intel_engine_in_execlists_submission_mode(engine)) + return (1U << GEN8_CTX_ID_WIDTH) - 1; /* - * As the ID is the gtt offset of the context's vma we - * pin the vma to ensure the ID remains fixed. + * GuC uses the top bit to signal proxy submission, so ignore + * that bit. */ - err = intel_context_pin(ce); - if (err == 0) { - stream->pinned_ctx = ce; - break; - } - } - i915_gem_context_unlock_engines(ctx); + return (1U << (GEN8_CTX_ID_WIDTH - 1)) - 1; + + case 11: + case 12: + /* + * 0x7ff is used by idle context. + */ + return ((1U << GEN11_SW_CTX_ID_WIDTH) - 1) << (GEN11_SW_CTX_ID_SHIFT - 32); - return stream->pinned_ctx; + default: + MISSING_CASE(INTEL_GEN(engine->i915)); + return 0; + } } -/** - * oa_get_render_ctx_id - determine and hold ctx hw id - * @stream: An i915-perf stream opened for OA metrics - * - * Determine the render context hw id, and ensure it remains fixed for the - * lifetime of the stream. This ensures that we don't have to worry about - * updating the context ID in OACONTROL on the fly. - * - * Returns: zero on success or a negative error code - */ -static int oa_get_render_ctx_id(struct i915_perf_stream *stream) +static u32 get_ctx_id(struct intel_context *ce, int idx) { - struct intel_context *ce; - - ce = oa_pin_context(stream); - if (IS_ERR(ce)) - return PTR_ERR(ce); switch (INTEL_GEN(ce->engine->i915)) { - case 7: { - /* - * On Haswell we don't do any post processing of the reports - * and don't need to use the mask. - */ - stream->specific_ctx_id = i915_ggtt_offset(ce->state); - stream->specific_ctx_id_mask = 0; - break; - } + case 7: + return i915_ggtt_offset(ce->state); case 8: case 9: case 10: - if (intel_engine_in_execlists_submission_mode(ce->engine)) { - stream->specific_ctx_id_mask = - (1U << GEN8_CTX_ID_WIDTH) - 1; - stream->specific_ctx_id = stream->specific_ctx_id_mask; - } else { - /* - * When using GuC, the context descriptor we write in - * i915 is read by GuC and rewritten before it's - * actually written into the hardware. 
The LRCA is - * what is put into the context id field of the - * context descriptor by GuC. Because it's aligned to - * a page, the lower 12bits are always at 0 and - * dropped by GuC. They won't be part of the context - * ID in the OA reports, so squash those lower bits. - */ - stream->specific_ctx_id = ce->lrc.lrca >> 12; + if (intel_engine_in_execlists_submission_mode(ce->engine)) + return (1U << GEN8_CTX_ID_WIDTH) - 1 - idx; - /* - * GuC uses the top bit to signal proxy submission, so - * ignore that bit. - */ - stream->specific_ctx_id_mask = - (1U << (GEN8_CTX_ID_WIDTH - 1)) - 1; - } - break; + /* + * When using GuC, the context descriptor we write in i915 is + * read by GuC and rewritten before it's actually written into + * the hardware. The LRCA is what is put into the context id + * field of the context descriptor by GuC. Because it's + * aligned to a page, the lower 12bits are always at 0 and + * dropped by GuC. They won't be part of the context ID in the + * OA reports, so squash those lower bits. + */ + return ce->lrc.lrca >> 12; case 11: - case 12: { - stream->specific_ctx_id_mask = - ((1U << GEN11_SW_CTX_ID_WIDTH) - 1) << (GEN11_SW_CTX_ID_SHIFT - 32); + case 12: /* * Pick an unused context id * 0 - BITS_PER_LONG are used by other contexts * GEN12_MAX_CONTEXT_HW_ID (0x7ff) is used by idle context */ - stream->specific_ctx_id = (GEN12_MAX_CONTEXT_HW_ID - 1) << (GEN11_SW_CTX_ID_SHIFT - 32); - break; - } + return (GEN12_MAX_CONTEXT_HW_ID - 1 - idx) << (GEN11_SW_CTX_ID_SHIFT - 32); default: MISSING_CASE(INTEL_GEN(ce->engine->i915)); + return 0; } - - ce->tag = stream->specific_ctx_id; - - drm_dbg(&stream->perf->i915->drm, - "filtering on ctx_id=0x%x ctx_id_mask=0x%x\n", - stream->specific_ctx_id, - stream->specific_ctx_id_mask); - - return 0; } /** - * oa_put_render_ctx_id - counterpart to oa_get_render_ctx_id releases hold + * oa_put_render_ctx_id - counterpart to oa_get_render_ctx_ids releases hold * @stream: An i915-perf stream opened for OA metrics * * In case anything needed doing to ensure the context HW ID would remain valid * for the lifetime of the stream, then that can be undone here. */ -static void oa_put_render_ctx_id(struct i915_perf_stream *stream) +static void oa_put_render_ctx_ids(struct i915_perf_stream *stream) +{ + int i; + + for (i = 0; i < stream->n_pinned_ctxs; i++) { + struct intel_context *ce; + + ce = fetch_and_zero(&stream->pinned_ctxs[i].ce); + if (ce) { + ce->tag = 0; /* recomputed on next submission after parking */ + intel_context_unpin(ce); + } + + stream->pinned_ctxs[i].id = INVALID_CTX_ID; + } + + stream->ctx_id_mask = 0; + stream->n_pinned_ctxs = 0; + + kfree(stream->pinned_ctxs); +} + +static int oa_get_render_ctx_ids(struct i915_perf_stream *stream) { struct intel_context *ce; + int i, err = 0; + u32 n_allocated_ctxs = 0; - ce = fetch_and_zero(&stream->pinned_ctx); - if (ce) { - ce->tag = 0; /* recomputed on next submission after parking */ - intel_context_unpin(ce); + stream->ctx_id_mask = get_ctx_id_mask(stream->engine); + + for (i = 0; i < stream->n_ctxs; i++) { + struct i915_gem_context *ctx = stream->ctxs[i]; + struct i915_gem_engines_iter it; + + for_each_gem_engine(ce, i915_gem_context_lock_engines(ctx), it) { + if (ce->engine != stream->engine) /* first match! */ + continue; + + /* + * As the ID is the gtt offset of the context's vma we + * pin the vma to ensure the ID remains fixed. 
+ */ + err = intel_context_pin(ce); + if (err) { + i915_gem_context_unlock_engines(ctx); + break; + } + + if (stream->n_pinned_ctxs >= n_allocated_ctxs) { + u32 new_allocated_len = max(n_allocated_ctxs * 2, 2u); + struct i915_perf_context_detail *new_ctxs = + krealloc(stream->pinned_ctxs, + sizeof(*stream->pinned_ctxs) * + new_allocated_len, + GFP_KERNEL); + + if (!new_ctxs) { + err = -ENOMEM; + break; + } + + n_allocated_ctxs = new_allocated_len; + stream->pinned_ctxs = new_ctxs; + } + + stream->pinned_ctxs[stream->n_pinned_ctxs].ce = ce; + stream->pinned_ctxs[stream->n_pinned_ctxs].id = get_ctx_id(ce, i); + + drm_dbg(&stream->perf->i915->drm, + "filtering on ctx_id%i=0x%x ctx_id_mask=0x%x\n", + i, stream->pinned_ctxs[i].id, stream->ctx_id_mask); + + ce->tag = stream->pinned_ctxs[stream->n_pinned_ctxs].id; + + stream->n_pinned_ctxs++; + } + i915_gem_context_unlock_engines(ctx); + if (err) + goto err; } - stream->specific_ctx_id = INVALID_CTX_ID; - stream->specific_ctx_id_mask = 0; + return 0; + +err: + oa_put_render_ctx_ids(stream); + + return err; } static void @@ -1389,10 +1453,13 @@ static void i915_oa_stream_destroy(struct i915_perf_stream *stream) * See i915_oa_init_reg_state() and lrc_configure_all_contexts() */ WRITE_ONCE(perf->exclusive_stream, NULL); - err = i915_perf_stream_sync(stream, false /* enable */); - if (err) { - drm_err(&perf->i915->drm, - "Error while disabling OA stream\n"); + + if (!err) { + err = i915_perf_stream_sync(stream, false /* enable */); + if (err) { + drm_err(&perf->i915->drm, + "Error while disabling OA stream\n"); + } } intel_context_unpin(stream->config_context); @@ -1403,8 +1470,7 @@ static void i915_oa_stream_destroy(struct i915_perf_stream *stream) intel_uncore_forcewake_put(stream->uncore, FORCEWAKE_ALL); intel_engine_pm_put(stream->engine); - if (stream->ctx) - oa_put_render_ctx_id(stream); + oa_put_render_ctx_ids(stream); free_oa_configs(stream); free_noa_wait(stream); @@ -1496,7 +1562,7 @@ static void gen8_init_oa_buffer(struct i915_perf_stream *stream) * reports we will forward to userspace while filtering for a single * context. */ - stream->oa_buffer.last_ctx_id = INVALID_CTX_ID; + stream->oa_buffer.last_ctx_match = false; spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags); @@ -1550,7 +1616,7 @@ static void gen12_init_oa_buffer(struct i915_perf_stream *stream) * reports we will forward to userspace while filtering for a single * context. */ - stream->oa_buffer.last_ctx_id = INVALID_CTX_ID; + stream->oa_buffer.last_ctx_match = false; spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags); @@ -2273,11 +2339,10 @@ static int gen8_configure_context(struct i915_perf_stream *stream, return err; } -static int gen12_configure_oar_context(struct i915_perf_stream *stream, - struct i915_active *active) +static int gen12_configure_oar_contexts(struct i915_perf_stream *stream, + struct i915_active *active) { - int err; - struct intel_context *ce = stream->pinned_ctx; + int i; u32 format = stream->oa_buffer.format; struct flex regs_context[] = { { @@ -2298,7 +2363,7 @@ static int gen12_configure_oar_context(struct i915_perf_stream *stream, (active ? GEN12_OAR_OACONTROL_COUNTER_ENABLE : 0) }, { - RING_CONTEXT_CONTROL(ce->engine->mmio_base), + RING_CONTEXT_CONTROL(stream->engine->mmio_base), CTX_CONTEXT_CONTROL, _MASKED_FIELD(GEN12_CTX_CTRL_OAR_CONTEXT_ENABLE, active ? 
@@ -2307,18 +2372,28 @@ static int gen12_configure_oar_context(struct i915_perf_stream *stream, }, }; - /* Modify the context image of pinned context with regs_context*/ - err = intel_context_lock_pinned(ce); - if (err) - return err; + for (i = 0; i < stream->n_pinned_ctxs; i++) { + struct intel_context *ce = stream->pinned_ctxs[i].ce; + int err; - err = gen8_modify_context(stream, ce, regs_context, ARRAY_SIZE(regs_context)); - intel_context_unlock_pinned(ce); - if (err) - return err; + /* Modify the context image of pinned context with regs_context*/ + err = intel_context_lock_pinned(ce); + if (err) + return err; - /* Apply regs_lri using LRI with pinned context */ - return gen8_modify_self(ce, regs_lri, ARRAY_SIZE(regs_lri), active); + err = gen8_modify_context(stream, ce, regs_context, ARRAY_SIZE(regs_context)); + intel_context_unlock_pinned(ce); + if (err) + return err; + + /* Apply regs_lri using LRI with pinned context */ + err = gen8_modify_self(ce, regs_lri, ARRAY_SIZE(regs_lri), + active); + if (err) + return err; + } + + return 0; } /* @@ -2585,11 +2660,9 @@ gen12_enable_metric_set(struct i915_perf_stream *stream, * another set of performance registers. Configure the unit dealing * with those. */ - if (stream->ctx) { - ret = gen12_configure_oar_context(stream, active); - if (ret) - return ret; - } + ret = gen12_configure_oar_contexts(stream, active); + if (ret) + return ret; return emit_oa_config(stream, stream->oa_config, stream->config_context, @@ -2630,8 +2703,7 @@ static void gen12_disable_metric_set(struct i915_perf_stream *stream, gen12_configure_all_contexts(stream, NULL, active); /* disable the context save/restore or OAR counters */ - if (stream->ctx) - gen12_configure_oar_context(stream, active); + gen12_configure_oar_contexts(stream, active); /* Make sure we disable noa to save power. */ intel_uncore_rmw(uncore, RPM_CONFIG1, GEN10_GT_NOA_ENABLE, 0); @@ -2640,8 +2712,7 @@ static void gen12_disable_metric_set(struct i915_perf_stream *stream, static void gen7_oa_enable(struct i915_perf_stream *stream) { struct intel_uncore *uncore = stream->uncore; - struct i915_gem_context *ctx = stream->ctx; - u32 ctx_id = stream->specific_ctx_id; + u32 ctx_id = stream->n_pinned_ctxs ? stream->pinned_ctxs[0].id : 0; bool periodic = stream->periodic; u32 period_exponent = stream->period_exponent; u32 report_format = stream->oa_buffer.format; @@ -2663,7 +2734,7 @@ static void gen7_oa_enable(struct i915_perf_stream *stream) GEN7_OACONTROL_TIMER_PERIOD_SHIFT) | (periodic ? GEN7_OACONTROL_TIMER_ENABLE : 0) | (report_format << GEN7_OACONTROL_FORMAT_SHIFT) | - (ctx ? GEN7_OACONTROL_PER_CTX_ENABLE : 0) | + (stream->n_ctxs ? 
GEN7_OACONTROL_PER_CTX_ENABLE : 0) | GEN7_OACONTROL_ENABLE); } @@ -2880,7 +2951,7 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream, } if (!(props->sample_flags & SAMPLE_OA_REPORT) && - (INTEL_GEN(perf->i915) < 12 || !stream->ctx)) { + (INTEL_GEN(perf->i915) < 12 || !stream->n_ctxs)) { DRM_DEBUG("Only OA report sampling supported\n"); return -EINVAL; } @@ -2928,12 +2999,10 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream, if (stream->periodic) stream->period_exponent = props->oa_period_exponent; - if (stream->ctx) { - ret = oa_get_render_ctx_id(stream); - if (ret) { - DRM_DEBUG("Invalid context id to filter with\n"); - return ret; - } + ret = oa_get_render_ctx_ids(stream); + if (ret) { + DRM_DEBUG("Invalid context id to filter with\n"); + return ret; } ret = alloc_noa_wait(stream); @@ -2970,21 +3039,14 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream, stream->ops = &i915_oa_stream_ops; - timeline = intel_timeline_create(stream->engine->gt, NULL); - if (IS_ERR(timeline)) { - ret = PTR_ERR(timeline); - goto err_timeline; - } - stream->config_context = intel_context_create(stream->engine); if (IS_ERR(stream->config_context)) { intel_timeline_put(timeline); ret = PTR_ERR(stream->config_context); - goto err_timeline; + goto err_context; } stream->config_context->sseu = props->sseu; - stream->config_context->timeline = timeline; ret = intel_context_pin(stream->config_context); if (ret) @@ -3019,7 +3081,7 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream, err_context_pin: intel_context_put(stream->config_context); -err_timeline: +err_context: free_oa_buffer(stream); err_oa_buf_alloc: @@ -3032,8 +3094,7 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream, free_noa_wait(stream); err_noa_wait_alloc: - if (stream->ctx) - oa_put_render_ctx_id(stream); + oa_put_render_ctx_ids(stream); return ret; } @@ -3226,8 +3287,12 @@ static void i915_perf_enable_locked(struct i915_perf_stream *stream) if (stream->ops->enable) stream->ops->enable(stream); - if (stream->hold_preemption) - intel_context_set_nopreempt(stream->pinned_ctx); + if (stream->hold_preemption) { + int i; + + for (i = 0; i < stream->n_pinned_ctxs; i++) + intel_context_set_nopreempt(stream->pinned_ctxs[i].ce); + } } /** @@ -3252,8 +3317,12 @@ static void i915_perf_disable_locked(struct i915_perf_stream *stream) /* Allow stream->ops->disable() to refer to this */ stream->enabled = false; - if (stream->hold_preemption) - intel_context_clear_nopreempt(stream->pinned_ctx); + if (stream->hold_preemption) { + int i; + + for (i = 0; i < stream->n_pinned_ctxs; i++) + intel_context_clear_nopreempt(stream->pinned_ctxs[i].ce); + } if (stream->ops->disable) stream->ops->disable(stream); @@ -3271,7 +3340,7 @@ static long i915_perf_config_locked(struct i915_perf_stream *stream, return -EINVAL; if (config != stream->oa_config) { - struct intel_context *ce = stream->pinned_ctx ?: stream->config_context; + int i; active = i915_active_create(); if (!active) { @@ -3279,28 +3348,29 @@ static long i915_perf_config_locked(struct i915_perf_stream *stream, goto err_config; } - /* - * If OA is bound to a specific context, emit the - * reconfiguration inline from that context. The update - * will then be ordered with respect to submission on that - * context. - * - * When set globally, we use a low priority kernel context, - * so it will effectively take effect when idle. 
- */ ret = emit_oa_config(stream, config, - ce, + stream->config_context, active, BIT(I915_OA_CONFIG_PART_GLOBAL) | BIT(I915_OA_CONFIG_PART_PER_CONTEXT)); if (ret) goto err_active; + for (i = 0; i < stream->n_pinned_ctxs; i++) { + ret = emit_oa_config(stream, config, + stream->pinned_ctxs[i].ce, + active, + BIT(I915_OA_CONFIG_PART_PER_CONTEXT)); + if (ret) + goto err_active; + } + config = xchg(&stream->oa_config, config); } err_active: - i915_active_put(active); + if (active) + i915_active_put(active); err_config: i915_oa_config_put(config); @@ -3381,9 +3451,10 @@ static void i915_perf_destroy_locked(struct i915_perf_stream *stream) if (stream->ops->destroy) stream->ops->destroy(stream); - if (stream->ctx) - i915_gem_context_put(stream->ctx); + while (stream->n_ctxs--) + i915_gem_context_put(stream->ctxs[stream->n_ctxs]); + kfree(stream->ctxs); kfree(stream); } @@ -3458,25 +3529,12 @@ i915_perf_open_ioctl_locked(struct i915_perf *perf, struct perf_open_properties *props, struct drm_file *file) { - struct i915_gem_context *specific_ctx = NULL; + struct drm_i915_file_private *file_priv = file->driver_priv; struct i915_perf_stream *stream = NULL; unsigned long f_flags = 0; bool privileged_op = true; int stream_fd; - int ret; - - if (props->single_context) { - u32 ctx_handle = props->ctx_handle; - struct drm_i915_file_private *file_priv = file->driver_priv; - - specific_ctx = i915_gem_context_lookup(file_priv, ctx_handle); - if (!specific_ctx) { - DRM_DEBUG("Failed to look up context with ID %u for opening perf stream\n", - ctx_handle); - ret = -ENOENT; - goto err; - } - } + int i, ret; /* * On Haswell the OA unit supports clock gating off for a specific @@ -3497,17 +3555,16 @@ i915_perf_open_ioctl_locked(struct i915_perf *perf, * doesn't request global stream access (i.e. query based sampling * using MI_RECORD_PERF_COUNT. 
*/ - if (IS_HASWELL(perf->i915) && specific_ctx) + if (IS_HASWELL(perf->i915) && props->n_ctx_handles > 0) privileged_op = false; - else if (IS_GEN(perf->i915, 12) && specific_ctx && + else if (IS_GEN(perf->i915, 12) && (props->n_ctx_handles > 0) && (props->sample_flags & SAMPLE_OA_REPORT) == 0) privileged_op = false; if (props->hold_preemption) { - if (!props->single_context) { + if (!props->n_ctx_handles) { DRM_DEBUG("preemption disable with no context\n"); - ret = -EINVAL; - goto err; + return -EINVAL; } privileged_op = true; } @@ -3528,23 +3585,43 @@ i915_perf_open_ioctl_locked(struct i915_perf *perf, if (privileged_op && i915_perf_stream_paranoid && !capable(CAP_SYS_ADMIN)) { DRM_DEBUG("Insufficient privileges to open i915 perf stream\n"); - ret = -EACCES; - goto err_ctx; + return -EACCES; } stream = kzalloc(sizeof(*stream), GFP_KERNEL); - if (!stream) { - ret = -ENOMEM; - goto err_ctx; + if (!stream) + return -ENOMEM; + + if (props->n_ctx_handles) { + gfp_t alloc_flags = GFP_KERNEL | __GFP_ZERO; + + stream->ctxs = kmalloc_array(props->n_ctx_handles, + sizeof(*stream->ctxs), + alloc_flags); + if (!stream->ctxs) + goto err_ctx; } stream->perf = perf; - stream->ctx = specific_ctx; stream->poll_oa_period = props->poll_oa_period; + for (i = 0; i < props->n_ctx_handles; i++) { + stream->ctxs[i] = i915_gem_context_lookup(file_priv, + props->ctx_handles[i]); + if (!stream->ctxs[i]) { + DRM_DEBUG("Failed to look up context with ID %u for opening perf stream\n", + props->ctx_handles[i]); + + ret = -ENOENT; + goto err_ctx; + } + + stream->n_ctxs++; + } + ret = i915_oa_stream_init(stream, param, props); if (ret) - goto err_alloc; + goto err_ctx; /* we avoid simply assigning stream->sample_flags = props->sample_flags * to have _stream_init check the combination of sample flags more @@ -3579,12 +3656,11 @@ i915_perf_open_ioctl_locked(struct i915_perf *perf, err_flags: if (stream->ops->destroy) stream->ops->destroy(stream); -err_alloc: - kfree(stream); err_ctx: - if (specific_ctx) - i915_gem_context_put(specific_ctx); -err: + while (stream->n_ctxs--) + i915_gem_context_put(stream->ctxs[stream->n_ctxs]); + kfree(stream->ctxs); + kfree(stream); return ret; } @@ -3616,7 +3692,7 @@ static int read_properties_unlocked(struct i915_perf *perf, { u64 __user *uprop = uprops; u32 i; - int ret; + int err; memset(props, 0, sizeof(struct perf_open_properties)); props->poll_oa_period = DEFAULT_POLL_PERIOD_NS; @@ -3650,23 +3726,36 @@ static int read_properties_unlocked(struct i915_perf *perf, u64 oa_period, oa_freq_hz; u64 id, value; - ret = get_user(id, uprop); - if (ret) - return ret; + err = get_user(id, uprop); + if (err) + goto error; - ret = get_user(value, uprop + 1); - if (ret) - return ret; + err = get_user(value, uprop + 1); + if (err) + goto error; if (id == 0 || id >= DRM_I915_PERF_PROP_MAX) { DRM_DEBUG("Unknown i915 perf property ID\n"); - return -EINVAL; + err = -EINVAL; + goto error; } switch ((enum drm_i915_perf_property_id)id) { case DRM_I915_PERF_PROP_CTX_HANDLE: - props->single_context = 1; - props->ctx_handle = value; + if (props->n_ctx_handles > 0) { + DRM_DEBUG("Context handle specified multiple times\n"); + err = -EINVAL; + goto error; + } + props->ctx_handles = + kmalloc_array(1, sizeof(*props->ctx_handles), + GFP_KERNEL); + if (!props->ctx_handles) { + err = -ENOMEM; + goto error; + } + props->ctx_handles[0] = value; + props->n_ctx_handles = 1; break; case DRM_I915_PERF_PROP_SAMPLE_OA: if (value) @@ -3675,7 +3764,8 @@ static int read_properties_unlocked(struct i915_perf *perf, case 
DRM_I915_PERF_PROP_OA_METRICS_SET: if (value == 0) { DRM_DEBUG("Unknown OA metric set ID\n"); - return -EINVAL; + err = -EINVAL; + goto error; } props->metrics_set = value; break; @@ -3683,12 +3773,14 @@ static int read_properties_unlocked(struct i915_perf *perf, if (value == 0 || value >= I915_OA_FORMAT_MAX) { DRM_DEBUG("Out-of-range OA report format %llu\n", value); - return -EINVAL; + err = -EINVAL; + goto error; } if (!perf->oa_formats[value].size) { DRM_DEBUG("Unsupported OA report format %llu\n", value); - return -EINVAL; + err = -EINVAL; + goto error; } props->oa_format = value; break; @@ -3696,7 +3788,8 @@ static int read_properties_unlocked(struct i915_perf *perf, if (value > OA_EXPONENT_MAX) { DRM_DEBUG("OA timer exponent too high (> %u)\n", OA_EXPONENT_MAX); - return -EINVAL; + err = -EINVAL; + goto error; } /* Theoretically we can program the OA unit to sample @@ -3725,7 +3818,8 @@ static int read_properties_unlocked(struct i915_perf *perf, !capable(CAP_SYS_ADMIN)) { DRM_DEBUG("OA exponent would exceed the max sampling frequency (sysctl dev.i915.oa_max_sample_rate) %uHz without root privileges\n", i915_oa_max_sample_rate); - return -EACCES; + err = -EACCES; + goto error; } props->oa_periodic = true; @@ -3741,13 +3835,14 @@ static int read_properties_unlocked(struct i915_perf *perf, u64_to_user_ptr(value), sizeof(user_sseu))) { DRM_DEBUG("Unable to copy global sseu parameter\n"); - return -EFAULT; + err = -EFAULT; + goto error; } - ret = get_sseu_config(&props->sseu, props->engine, &user_sseu); - if (ret) { + err = get_sseu_config(&props->sseu, props->engine, &user_sseu); + if (err) { DRM_DEBUG("Invalid SSEU configuration\n"); - return ret; + goto error; } props->has_sseu = true; break; @@ -3756,19 +3851,25 @@ static int read_properties_unlocked(struct i915_perf *perf, if (value < 100000 /* 100us */) { DRM_DEBUG("OA availability timer too small (%lluns < 100us)\n", value); - return -EINVAL; + err = -EINVAL; + goto error; } props->poll_oa_period = value; break; case DRM_I915_PERF_PROP_MAX: MISSING_CASE(id); - return -EINVAL; + err = -EINVAL; + goto error; } uprop += 2; } return 0; + +error: + kfree(props->ctx_handles); + return err; } /** @@ -3828,6 +3929,8 @@ int i915_perf_open_ioctl(struct drm_device *dev, void *data, ret = i915_perf_open_ioctl_locked(perf, param, &props, file); mutex_unlock(&perf->lock); + kfree(props.ctx_handles); + return ret; } diff --git a/drivers/gpu/drm/i915/i915_perf_types.h b/drivers/gpu/drm/i915/i915_perf_types.h index c0a78166b1d9..45faf89f7be4 100644 --- a/drivers/gpu/drm/i915/i915_perf_types.h +++ b/drivers/gpu/drm/i915/i915_perf_types.h @@ -160,10 +160,15 @@ struct i915_perf_stream { int sample_size; /** - * @ctx: %NULL if measuring system-wide across all contexts or a - * specific context that is being monitored. + * @n_ctxs: Number of contexts pinned for the recording. */ - struct i915_gem_context *ctx; + u32 n_ctxs; + + /** + * @ctxs: All to %NULL if measuring system-wide across all contexts or + * a list specific contexts that are being monitored. + */ + struct i915_gem_context **ctxs; /** * @enabled: Whether the stream is currently enabled, considering @@ -198,19 +203,31 @@ struct i915_perf_stream { struct llist_head oa_config_bos; /** - * @pinned_ctx: The OA context specific information. + * @pinned_ctxs: A array of logical context details needed for + * filtering and their associated pinned ID. */ - struct intel_context *pinned_ctx; + struct i915_perf_context_detail { + /** + * @ce: The OA context specific information. 
+ */ + struct intel_context *ce; + + /** + * @id: The ID of the specific context. + */ + u32 id; + } *pinned_ctxs; /** - * @specific_ctx_id: The id of the specific context. + * @n_pinned_ctxs: Length of the @pinned_ctxs array, 0 if measuring + * system-wide across all contexts. */ - u32 specific_ctx_id; + u32 n_pinned_ctxs; /** - * @specific_ctx_id_mask: The mask used to masking specific_ctx_id bits. + * @ctx_id_mask: The mask used to mask the context ID bits. */ - u32 specific_ctx_id_mask; + u32 ctx_id_mask; /** * @poll_check_timer: High resolution timer that will periodically @@ -246,7 +263,7 @@ struct i915_perf_stream { struct { struct i915_vma *vma; u8 *vaddr; - u32 last_ctx_id; + bool last_ctx_match; int format; int format_size; int size_exponent; From patchwork Mon May 4 11:12:49 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lionel Landwerlin X-Patchwork-Id: 11525509 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id F1F4692A for ; Mon, 4 May 2020 11:13:04 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id DA5952073E for ; Mon, 4 May 2020 11:13:04 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org DA5952073E Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 6778A89FE3; Mon, 4 May 2020 11:13:04 +0000 (UTC) X-Original-To: intel-gfx@lists.freedesktop.org Delivered-To: intel-gfx@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id 0634189FCC for ; Mon, 4 May 2020 11:13:01 +0000 (UTC) IronPort-SDR: Ks4IUviNaI0Zbporwz/7OOd5P8wxp9bDCijz4ZRjvo/IxY5JYaFXXxymYfBvp6tOO6KIE70vGM 4LzAPjj2tfsA== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 May 2020 04:13:00 -0700 IronPort-SDR: GSVipyLYxMYNZf+7DRIjdiXfmEZ3Cv1CYoPpHbD8HpPxvFYyyoPUSsdK8aUrvyOSjV4bWd417w OWuaIqWRm1iw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.73,351,1583222400"; d="scan'208";a="248188645" Received: from efilatov-mobl.ger.corp.intel.com (HELO delly.ger.corp.intel.com) ([10.252.56.163]) by orsmga007.jf.intel.com with ESMTP; 04 May 2020 04:12:59 -0700 From: Lionel Landwerlin To: intel-gfx@lists.freedesktop.org Date: Mon, 4 May 2020 14:12:49 +0300 Message-Id: <20200504111249.1367096-5-lionel.g.landwerlin@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200504111249.1367096-1-lionel.g.landwerlin@intel.com> References: <20200504111249.1367096-1-lionel.g.landwerlin@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v12 4/4] drm/i915/perf: enable filtering on multiple contexts X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: chris@chris-wilson.co.uk Errors-To: 
intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Add 2 new properties to the i915-perf open ioctl to specify an array of GEM context handles as well as the length of the array. This can be used by drivers using multiple GEM contexts to implement a single GL context. Signed-off-by: Lionel Landwerlin Link: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4464 Reviewed-by: Chris Wilson --- drivers/gpu/drm/i915/i915_perf.c | 58 ++++++++++++++++++++++++++++++-- include/uapi/drm/i915_drm.h | 21 ++++++++++++ 2 files changed, 76 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c index 66d52ee4767b..cc7ad38b3294 100644 --- a/drivers/gpu/drm/i915/i915_perf.c +++ b/drivers/gpu/drm/i915/i915_perf.c @@ -3691,7 +3691,8 @@ static int read_properties_unlocked(struct i915_perf *perf, struct perf_open_properties *props) { u64 __user *uprop = uprops; - u32 i; + u32 __user *uctx_handles = NULL; + u32 i, n_uctx_handles = 0; int err; memset(props, 0, sizeof(struct perf_open_properties)); @@ -3742,7 +3743,7 @@ static int read_properties_unlocked(struct i915_perf *perf, switch ((enum drm_i915_perf_property_id)id) { case DRM_I915_PERF_PROP_CTX_HANDLE: - if (props->n_ctx_handles > 0) { + if (props->n_ctx_handles > 0 || n_uctx_handles > 0) { DRM_DEBUG("Context handle specified multiple times\n"); err = -EINVAL; goto error; @@ -3856,6 +3857,38 @@ static int read_properties_unlocked(struct i915_perf *perf, } props->poll_oa_period = value; break; + case DRM_I915_PERF_PROP_CTX_HANDLE_ARRAY: + /* HSW can only filter in HW and only on a single + * context. + */ + if (IS_HASWELL(perf->i915)) { + DRM_DEBUG("Multi context filter not supported on HSW\n"); + err = -ENODEV; + goto error; + } + uctx_handles = u64_to_user_ptr(value); + break; + case DRM_I915_PERF_PROP_CTX_HANDLE_ARRAY_LENGTH: + if (IS_HASWELL(perf->i915)) { + DRM_DEBUG("Multi context filter not supported on HSW\n"); + err = -ENODEV; + goto error; + } + if (props->n_ctx_handles > 0 || n_uctx_handles > 0) { + DRM_DEBUG("Context handle specified multiple times\n"); + err = -EINVAL; + goto error; + } + props->ctx_handles = + kmalloc_array(value, + sizeof(*props->ctx_handles), + GFP_KERNEL); + if (!props->ctx_handles) { + err = -ENOMEM; + goto error; + } + n_uctx_handles = value; + break; case DRM_I915_PERF_PROP_MAX: MISSING_CASE(id); err = -EINVAL; @@ -3865,6 +3898,21 @@ static int read_properties_unlocked(struct i915_perf *perf, uprop += 2; } + if (n_uctx_handles > 0 && props->n_ctx_handles > 0) { + DRM_DEBUG("Context handle specified multiple times\n"); + err = -EINVAL; + goto error; + } + + for (i = 0; i < n_uctx_handles; i++) { + err = get_user(props->ctx_handles[i], uctx_handles); + if (err) + goto error; + + uctx_handles++; + props->n_ctx_handles++; + } + return 0; error: @@ -4648,8 +4696,12 @@ int i915_perf_ioctl_version(void) * * 5: Add DRM_I915_PERF_PROP_POLL_OA_PERIOD parameter that controls the * interval for the hrtimer used to check for OA data. + * + * 6: Add DRM_I915_PERF_PROP_CTX_HANDLE_ARRAY & + * DRM_I915_PERF_PROP_CTX_HANDLE_ARRAY_LENGTH to allow an + * application to monitor/pin multiple contexts. 
*/ - return 5; + return 6; } #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h index 14b67cd6b54b..f80e7932d728 100644 --- a/include/uapi/drm/i915_drm.h +++ b/include/uapi/drm/i915_drm.h @@ -1993,6 +1993,27 @@ enum drm_i915_perf_property_id { */ DRM_I915_PERF_PROP_POLL_OA_PERIOD, + /** + * Specifies an array of u32 GEM context handles to filter reports + * with. + * + * Using this parameter is incompatible with using + * DRM_I915_PERF_PROP_CTX_HANDLE. + * + * This property is available in perf revision 6. + */ + DRM_I915_PERF_PROP_CTX_HANDLE_ARRAY, + + /** + * Specifies the length of the array specified with + * DRM_I915_PERF_PROP_CTX_HANDLE_ARRAY. + * + * The length must be in the range [1, 4]. + * + * This property is available in perf revision 6. + */ + DRM_I915_PERF_PROP_CTX_HANDLE_ARRAY_LENGTH, + DRM_I915_PERF_PROP_MAX /* non-ABI */ };
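
For illustration only (not part of the patch): a minimal userspace sketch of how an application could use the two new properties to open an i915-perf stream filtered on two GEM contexts, e.g. a driver implementing a single GL context on top of multiple GEM contexts. The helper name and the OA metric set, format and exponent values below are placeholder assumptions; struct drm_i915_perf_open_param, DRM_IOCTL_I915_PERF_OPEN and the pre-existing DRM_I915_PERF_PROP_* identifiers come from the i915 uAPI, while DRM_I915_PERF_PROP_CTX_HANDLE_ARRAY and DRM_I915_PERF_PROP_CTX_HANDLE_ARRAY_LENGTH require a kernel with this series applied (perf revision 6).

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

/*
 * Sketch: open an i915-perf stream that filters OA reports on two GEM
 * contexts.  The values marked as placeholders would come from the OA
 * metric discovery path of a real driver.
 */
static int open_perf_stream_two_contexts(int drm_fd,
					 uint32_t ctx_handle_3d,
					 uint32_t ctx_handle_compute,
					 uint64_t metrics_set_id)
{
	uint32_t ctx_handles[2] = { ctx_handle_3d, ctx_handle_compute };
	uint64_t properties[] = {
		/* New in perf revision 6: filter on an array of contexts. */
		DRM_I915_PERF_PROP_CTX_HANDLE_ARRAY, (uintptr_t)ctx_handles,
		DRM_I915_PERF_PROP_CTX_HANDLE_ARRAY_LENGTH, 2,

		/* Usual OA sampling setup (placeholder values). */
		DRM_I915_PERF_PROP_SAMPLE_OA, 1,
		DRM_I915_PERF_PROP_OA_METRICS_SET, metrics_set_id,
		DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8,
		DRM_I915_PERF_PROP_OA_EXPONENT, 16,
	};
	struct drm_i915_perf_open_param param = {
		.flags = I915_PERF_FLAG_FD_CLOEXEC,
		.num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
		.properties_ptr = (uintptr_t)properties,
	};

	/* Returns a stream fd on success, -1 with errno set on failure. */
	return ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
}

The properties buffer stays the usual flat list of (id, value) u64 pairs; the context handle array is passed by pointer in the value slot and its length through the separate _LENGTH property, which per the uapi comment above must be between 1 and 4.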