From patchwork Wed Feb 3 18:39:09 2016
X-Patchwork-Submitter: Robert Bragg
X-Patchwork-Id: 8206961
From: Robert Bragg
To: intel-gfx@lists.freedesktop.org
Cc: David Airlie, dri-devel@lists.freedesktop.org, Sourab Gupta, Daniel Vetter
Date: Wed, 3 Feb 2016 18:39:09 +0000
Message-Id: <1454524753-13218-5-git-send-email-robert@sixbynine.org>
In-Reply-To: <1454524753-13218-1-git-send-email-robert@sixbynine.org>
References: <1454524753-13218-1-git-send-email-robert@sixbynine.org>
X-Mailer: git-send-email 2.7.0
Subject: [Intel-gfx] [PATCH 4/8] drm/i915: Add i915 perf event for Haswell OA unit

Gen graphics hardware can be set up to periodically write snapshots of
performance counters into a circular buffer via its Observation
Architecture and this patch exposes that capability to userspace via
the i915 perf interface.

Cc: Chris Wilson
Signed-off-by: Robert Bragg
Signed-off-by: Zhenyu Wang
---
 drivers/gpu/drm/i915/i915_drv.h         |  51 ++-
 drivers/gpu/drm/i915/i915_gem_context.c |  23 +-
 drivers/gpu/drm/i915/i915_oa_hsw.c      | 166 +++---
 drivers/gpu/drm/i915/i915_perf.c        | 783 +++++++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_reg.h         | 338 ++++++++++++++
 include/uapi/drm/i915_drm.h             |  56 ++-
 6 files changed, 1317 insertions(+), 100 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index 86da0ae..e4b9399 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -1732,8 +1732,13 @@ struct intel_wm_config { bool sprites_scaled; }; +struct i915_oa_format { + u32 format; + int size; +}; + struct i915_oa_reg { - u32 addr; + i915_reg_t addr; u32 value; }; @@ -1749,6 +1754,7 @@ struct i915_perf_stream { struct list_head link; u32 sample_flags; + int sample_size; struct intel_context *ctx; bool enabled; @@ -1812,6 +1818,20 @@ struct i915_perf_stream { void (*destroy)(struct i915_perf_stream *stream); }; +struct i915_oa_ops { + void (*init_oa_buffer)(struct drm_i915_private *dev_priv); + int (*enable_metric_set)(struct drm_i915_private *dev_priv); + void (*disable_metric_set)(struct drm_i915_private *dev_priv); + void (*oa_enable)(struct drm_i915_private *dev_priv); + void (*oa_disable)(struct drm_i915_private *dev_priv); + void (*update_oacontrol)(struct drm_i915_private *dev_priv); + void (*update_hw_ctx_id_locked)(struct drm_i915_private *dev_priv, + u32 ctx_id); + int (*read)(struct i915_perf_stream *stream, + struct i915_perf_read_state *read_state); + bool (*oa_buffer_is_empty)(struct drm_i915_private *dev_priv); +}; + struct drm_i915_private { struct drm_device *dev; struct kmem_cache *objects; @@ -2065,16 +2085,43 @@ struct drm_i915_private { struct { bool initialized; + struct mutex lock; struct list_head streams; + spinlock_t hook_lock; + struct { + struct i915_perf_stream *exclusive_stream; + + u32 specific_ctx_id; + + struct hrtimer poll_check_timer; + wait_queue_head_t poll_wq; + + bool periodic; + u32 period_exponent; + u32 metrics_set; const struct i915_oa_reg *mux_regs; int mux_regs_len; const struct i915_oa_reg *b_counter_regs; int b_counter_regs_len; + + struct { + struct drm_i915_gem_object *obj; + u32 gtt_offset; + u8 *addr; + u32 head; + u32 tail; + int format; + int format_size; + } oa_buffer; + + struct i915_oa_ops ops; + const struct i915_oa_format *oa_formats; + int n_builtin_sets; } oa; } perf; @@ -3348,6 +3395,8 @@ int i915_gem_context_setparam_ioctl(struct drm_device *dev, void *data, int i915_perf_open_ioctl(struct drm_device *dev, void *data, struct drm_file *file); +void i915_oa_context_pin_notify(struct drm_i915_private *dev_priv, +
struct intel_context *context); /* i915_gem_evict.c */ int __must_check i915_gem_evict_something(struct drm_device *dev, diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c index 83a097c..071cfa8 100644 --- a/drivers/gpu/drm/i915/i915_gem_context.c +++ b/drivers/gpu/drm/i915/i915_gem_context.c @@ -133,6 +133,23 @@ static int get_context_size(struct drm_device *dev) return ret; } +static int i915_gem_context_pin_state(struct drm_device *dev, + struct intel_context *ctx) +{ + int ret; + + BUG_ON(!mutex_is_locked(&dev->struct_mutex)); + + ret = i915_gem_obj_ggtt_pin(ctx->legacy_hw_ctx.rcs_state, + get_context_alignment(dev), 0); + if (ret) + return ret; + + i915_oa_context_pin_notify(dev->dev_private, ctx); + + return 0; +} + static void i915_gem_context_clean(struct intel_context *ctx) { struct i915_hw_ppgtt *ppgtt = ctx->ppgtt; @@ -287,8 +304,7 @@ i915_gem_create_context(struct drm_device *dev, * be available. To avoid this we always pin the default * context. */ - ret = i915_gem_obj_ggtt_pin(ctx->legacy_hw_ctx.rcs_state, - get_context_alignment(dev), 0); + ret = i915_gem_context_pin_state(dev, ctx); if (ret) { DRM_DEBUG_DRIVER("Couldn't pin %d\n", ret); goto err_destroy; @@ -665,8 +681,7 @@ static int do_switch(struct drm_i915_gem_request *req) /* Trying to pin first makes error handling easier. */ if (ring == &dev_priv->ring[RCS]) { - ret = i915_gem_obj_ggtt_pin(to->legacy_hw_ctx.rcs_state, - get_context_alignment(ring->dev), 0); + ret = i915_gem_context_pin_state(ring->dev, to); if (ret) return ret; } diff --git a/drivers/gpu/drm/i915/i915_oa_hsw.c b/drivers/gpu/drm/i915/i915_oa_hsw.c index c05745e..5472aa0 100644 --- a/drivers/gpu/drm/i915/i915_oa_hsw.c +++ b/drivers/gpu/drm/i915/i915_oa_hsw.c @@ -27,106 +27,106 @@ #include "i915_drv.h" enum metric_set_id { - METRIC_SET_ID_RENDER_BASIC = 1, + METRIC_SET_ID_RENDER_BASIC = 1, }; int i915_oa_n_builtin_metric_sets_hsw = 1; static const struct i915_oa_reg b_counter_config_render_basic[] = { - { 0x2724, 0x00800000 }, - { 0x2720, 0x00000000 }, - { 0x2714, 0x00800000 }, - { 0x2710, 0x00000000 }, + { _MMIO(0x2724), 0x00800000 }, + { _MMIO(0x2720), 0x00000000 }, + { _MMIO(0x2714), 0x00800000 }, + { _MMIO(0x2710), 0x00000000 }, }; static const struct i915_oa_reg mux_config_render_basic[] = { - { 0x253A4, 0x01600000 }, - { 0x25440, 0x00100000 }, - { 0x25128, 0x00000000 }, - { 0x2691C, 0x00000800 }, - { 0x26AA0, 0x01500000 }, - { 0x26B9C, 0x00006000 }, - { 0x2791C, 0x00000800 }, - { 0x27AA0, 0x01500000 }, - { 0x27B9C, 0x00006000 }, - { 0x2641C, 0x00000400 }, - { 0x25380, 0x00000010 }, - { 0x2538C, 0x00000000 }, - { 0x25384, 0x0800AAAA }, - { 0x25400, 0x00000004 }, - { 0x2540C, 0x06029000 }, - { 0x25410, 0x00000002 }, - { 0x25404, 0x5C30FFFF }, - { 0x25100, 0x00000016 }, - { 0x25110, 0x00000400 }, - { 0x25104, 0x00000000 }, - { 0x26804, 0x00001211 }, - { 0x26884, 0x00000100 }, - { 0x26900, 0x00000002 }, - { 0x26908, 0x00700000 }, - { 0x26904, 0x00000000 }, - { 0x26984, 0x00001022 }, - { 0x26A04, 0x00000011 }, - { 0x26A80, 0x00000006 }, - { 0x26A88, 0x00000C02 }, - { 0x26A84, 0x00000000 }, - { 0x26B04, 0x00001000 }, - { 0x26B80, 0x00000002 }, - { 0x26B8C, 0x00000007 }, - { 0x26B84, 0x00000000 }, - { 0x27804, 0x00004844 }, - { 0x27884, 0x00000400 }, - { 0x27900, 0x00000002 }, - { 0x27908, 0x0E000000 }, - { 0x27904, 0x00000000 }, - { 0x27984, 0x00004088 }, - { 0x27A04, 0x00000044 }, - { 0x27A80, 0x00000006 }, - { 0x27A88, 0x00018040 }, - { 0x27A84, 0x00000000 }, - { 0x27B04, 0x00004000 }, - { 0x27B80, 
0x00000002 }, - { 0x27B8C, 0x000000E0 }, - { 0x27B84, 0x00000000 }, - { 0x26104, 0x00002222 }, - { 0x26184, 0x0C006666 }, - { 0x26284, 0x04000000 }, - { 0x26304, 0x04000000 }, - { 0x26400, 0x00000002 }, - { 0x26410, 0x000000A0 }, - { 0x26404, 0x00000000 }, - { 0x25420, 0x04108020 }, - { 0x25424, 0x1284A420 }, - { 0x2541C, 0x00000000 }, - { 0x25428, 0x00042049 }, + { _MMIO(0x253A4), 0x01600000 }, + { _MMIO(0x25440), 0x00100000 }, + { _MMIO(0x25128), 0x00000000 }, + { _MMIO(0x2691C), 0x00000800 }, + { _MMIO(0x26AA0), 0x01500000 }, + { _MMIO(0x26B9C), 0x00006000 }, + { _MMIO(0x2791C), 0x00000800 }, + { _MMIO(0x27AA0), 0x01500000 }, + { _MMIO(0x27B9C), 0x00006000 }, + { _MMIO(0x2641C), 0x00000400 }, + { _MMIO(0x25380), 0x00000010 }, + { _MMIO(0x2538C), 0x00000000 }, + { _MMIO(0x25384), 0x0800AAAA }, + { _MMIO(0x25400), 0x00000004 }, + { _MMIO(0x2540C), 0x06029000 }, + { _MMIO(0x25410), 0x00000002 }, + { _MMIO(0x25404), 0x5C30FFFF }, + { _MMIO(0x25100), 0x00000016 }, + { _MMIO(0x25110), 0x00000400 }, + { _MMIO(0x25104), 0x00000000 }, + { _MMIO(0x26804), 0x00001211 }, + { _MMIO(0x26884), 0x00000100 }, + { _MMIO(0x26900), 0x00000002 }, + { _MMIO(0x26908), 0x00700000 }, + { _MMIO(0x26904), 0x00000000 }, + { _MMIO(0x26984), 0x00001022 }, + { _MMIO(0x26A04), 0x00000011 }, + { _MMIO(0x26A80), 0x00000006 }, + { _MMIO(0x26A88), 0x00000C02 }, + { _MMIO(0x26A84), 0x00000000 }, + { _MMIO(0x26B04), 0x00001000 }, + { _MMIO(0x26B80), 0x00000002 }, + { _MMIO(0x26B8C), 0x00000007 }, + { _MMIO(0x26B84), 0x00000000 }, + { _MMIO(0x27804), 0x00004844 }, + { _MMIO(0x27884), 0x00000400 }, + { _MMIO(0x27900), 0x00000002 }, + { _MMIO(0x27908), 0x0E000000 }, + { _MMIO(0x27904), 0x00000000 }, + { _MMIO(0x27984), 0x00004088 }, + { _MMIO(0x27A04), 0x00000044 }, + { _MMIO(0x27A80), 0x00000006 }, + { _MMIO(0x27A88), 0x00018040 }, + { _MMIO(0x27A84), 0x00000000 }, + { _MMIO(0x27B04), 0x00004000 }, + { _MMIO(0x27B80), 0x00000002 }, + { _MMIO(0x27B8C), 0x000000E0 }, + { _MMIO(0x27B84), 0x00000000 }, + { _MMIO(0x26104), 0x00002222 }, + { _MMIO(0x26184), 0x0C006666 }, + { _MMIO(0x26284), 0x04000000 }, + { _MMIO(0x26304), 0x04000000 }, + { _MMIO(0x26400), 0x00000002 }, + { _MMIO(0x26410), 0x000000A0 }, + { _MMIO(0x26404), 0x00000000 }, + { _MMIO(0x25420), 0x04108020 }, + { _MMIO(0x25424), 0x1284A420 }, + { _MMIO(0x2541C), 0x00000000 }, + { _MMIO(0x25428), 0x00042049 }, }; static int select_render_basic_config(struct drm_i915_private *dev_priv) { - dev_priv->perf.oa.mux_regs = - mux_config_render_basic; - dev_priv->perf.oa.mux_regs_len = - ARRAY_SIZE(mux_config_render_basic); + dev_priv->perf.oa.mux_regs = + mux_config_render_basic; + dev_priv->perf.oa.mux_regs_len = + ARRAY_SIZE(mux_config_render_basic); - dev_priv->perf.oa.b_counter_regs = - b_counter_config_render_basic; - dev_priv->perf.oa.b_counter_regs_len = - ARRAY_SIZE(b_counter_config_render_basic); + dev_priv->perf.oa.b_counter_regs = + b_counter_config_render_basic; + dev_priv->perf.oa.b_counter_regs_len = + ARRAY_SIZE(b_counter_config_render_basic); - return 0; + return 0; } int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv) { - dev_priv->perf.oa.mux_regs = NULL; - dev_priv->perf.oa.mux_regs_len = 0; - dev_priv->perf.oa.b_counter_regs = NULL; - dev_priv->perf.oa.b_counter_regs_len = 0; + dev_priv->perf.oa.mux_regs = NULL; + dev_priv->perf.oa.mux_regs_len = 0; + dev_priv->perf.oa.b_counter_regs = NULL; + dev_priv->perf.oa.b_counter_regs_len = 0; - switch (dev_priv->perf.oa.metrics_set) { - case METRIC_SET_ID_RENDER_BASIC: - return 
select_render_basic_config(dev_priv); - default: - return -ENODEV; - } + switch (dev_priv->perf.oa.metrics_set) { + case METRIC_SET_ID_RENDER_BASIC: + return select_render_basic_config(dev_priv); + default: + return -ENODEV; + } } diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c index 9af9e52..b54f719 100644 --- a/drivers/gpu/drm/i915/i915_perf.c +++ b/drivers/gpu/drm/i915/i915_perf.c @@ -25,14 +25,692 @@ #include #include "i915_drv.h" +#include "intel_ringbuffer.h" +#include "intel_lrc.h" +#include "i915_oa_hsw.h" + +/* Must be a power of two */ +#define OA_BUFFER_SIZE SZ_16M +#define OA_TAKEN(tail, head) ((tail - head) & (OA_BUFFER_SIZE - 1)) + +/* frequency for forwarding samples from OA to perf buffer */ +#define POLL_FREQUENCY 200 +#define POLL_PERIOD max_t(u64, 10000, NSEC_PER_SEC / POLL_FREQUENCY) + +#define OA_EXPONENT_MAX 0x3f + +static struct i915_oa_format hsw_oa_formats[I915_OA_FORMAT_MAX] = { + [I915_OA_FORMAT_A13] = { 0, 64 }, + [I915_OA_FORMAT_A29] = { 1, 128 }, + [I915_OA_FORMAT_A13_B8_C8] = { 2, 128 }, + /* A29_B8_C8 Disallowed as 192 bytes doesn't factor into buffer size */ + [I915_OA_FORMAT_B4_C8] = { 4, 64 }, + [I915_OA_FORMAT_A45_B8_C8] = { 5, 256 }, + [I915_OA_FORMAT_B4_C8_A16] = { 6, 128 }, + [I915_OA_FORMAT_C4_B8] = { 7, 64 }, +}; + +#define SAMPLE_OA_REPORT (1<<0) struct perf_open_properties { u32 sample_flags; u64 single_context:1; u64 ctx_handle; + + /* OA sampling state */ + int metrics_set; + int oa_format; + bool oa_periodic; + u32 oa_period_exponent; }; +static bool gen7_oa_buffer_is_empty(struct drm_i915_private *dev_priv) +{ + u32 oastatus2 = I915_READ(GEN7_OASTATUS2); + u32 oastatus1 = I915_READ(GEN7_OASTATUS1); + u32 head = oastatus2 & GEN7_OASTATUS2_HEAD_MASK; + u32 tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK; + + return OA_TAKEN(tail, head) == 0; +} + +/** + * Appends a status record to a userspace read() buffer. + * + * Returns 0 if no room left in userspace read() buffer, 1 if status + * record successfully appended or -EFAULT for copy_to_user failure. + */ +static int append_oa_status(struct i915_perf_stream *stream, + struct i915_perf_read_state *read_state, + enum drm_i915_perf_record_type type) +{ + struct drm_i915_perf_record_header header = { type, 0, sizeof(header) }; + + if ((read_state->count - read_state->read) < header.size) + return 0; + + if (copy_to_user(read_state->buf, &header, sizeof(header))) + return -EFAULT; + + read_state->buf += sizeof(header); + read_state->read += header.size; + + return 1; +} + +/** + * Copies single OA report into userspace read() buffer. + * + * Returns 0 if no room left in userspace read() buffer, 1 if sample + * record successfully appended or -EFAULT for copy_to_user failure. 
+ */ +static int append_oa_sample(struct i915_perf_stream *stream, + struct i915_perf_read_state *read_state, + const u8 *report) +{ + struct drm_i915_private *dev_priv = stream->dev_priv; + int report_size = dev_priv->perf.oa.oa_buffer.format_size; + struct drm_i915_perf_record_header header; + u32 sample_flags = stream->sample_flags; + char __user *buf = read_state->buf; + + header.type = DRM_I915_PERF_RECORD_SAMPLE; + header.pad = 0; + header.size = stream->sample_size; + + if ((read_state->count - read_state->read) < header.size) + return 0; + + if (copy_to_user(buf, &header, sizeof(header))) + return -EFAULT; + buf += sizeof(header); + + if (sample_flags & SAMPLE_OA_REPORT) { + if (copy_to_user(buf, report, report_size)) + return -EFAULT; + buf += report_size; + } + + read_state->buf = buf; + read_state->read += header.size; + + return 1; +} + +/** + * Copies all buffered OA reports into userspace read() buffer. + * @head_ptr: (inout): the head pointer before and after appending + * + * Returns the number of records appended (may be zero if the read() + * buffer is already full), or -EFAULT for failures while copying. + * + * Consistent with i915_perf_stream::read(), this function should only + * return an -EFAULT from copying if no records have been copied and + * otherwise reports a short read. + */ +static int gen7_append_oa_reports(struct i915_perf_stream *stream, + struct i915_perf_read_state *read_state, + u32 *head_ptr, + u32 tail) +{ + struct drm_i915_private *dev_priv = stream->dev_priv; + int report_size = dev_priv->perf.oa.oa_buffer.format_size; + u8 *oa_buf_base = dev_priv->perf.oa.oa_buffer.addr; + u32 mask = (OA_BUFFER_SIZE - 1); + u32 head; + u8 *report; + u32 taken; + int ret = 0; + int n_records = 0; + + head = *head_ptr - dev_priv->perf.oa.oa_buffer.gtt_offset; + tail -= dev_priv->perf.oa.oa_buffer.gtt_offset; + + /* Note: the gpu doesn't wrap the tail according to the OA buffer size + * so when we need to make sure our head/tail values are in-bounds we + * use the above mask. + */ + + while ((taken = OA_TAKEN(tail, head))) { + /* The tail increases in 64 byte increments, not in + * format_size steps. + */ + if (taken < report_size) + break; + + report = oa_buf_base + (head & mask); + + if (dev_priv->perf.oa.exclusive_stream->enabled) { + ret = append_oa_sample(stream, read_state, report); + if (n_records && ret == -EFAULT) + ret = 0; + if (ret > 0) + n_records += ret; + else + break; + } + + /* We don't want to update head if there was a problem + * appending the sample. + */ + head += report_size; + } + + *head_ptr = dev_priv->perf.oa.oa_buffer.gtt_offset + head; + + return n_records; +} + +/** + * Check OA status registers, appending status records if necessary + * and copy as many buffered OA reports to userspace as possible. + * + * NB: this function should only return an -EFAULT from copying if no + * records have been copied and otherwise report a short read. 
+ * + * Returns the number of records appended (may be zero if no room in + * read() buffer), or -EFAULT (the only errno we expect to see here) + */ +static int gen7_oa_read(struct i915_perf_stream *stream, + struct i915_perf_read_state *read_state) +{ + struct drm_i915_private *dev_priv = stream->dev_priv; + u32 oastatus2; + u32 oastatus1; + u32 head; + u32 tail; + int ret = 1; /* > 0 implies a prior successful append */ + int n_records = 0; + + WARN_ON(!dev_priv->perf.oa.oa_buffer.addr); + + oastatus2 = I915_READ(GEN7_OASTATUS2); + oastatus1 = I915_READ(GEN7_OASTATUS1); + + head = oastatus2 & GEN7_OASTATUS2_HEAD_MASK; + tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK; + + if (unlikely(oastatus1 & (GEN7_OASTATUS1_OABUFFER_OVERFLOW | + GEN7_OASTATUS1_REPORT_LOST))) { + + if (oastatus1 & GEN7_OASTATUS1_OABUFFER_OVERFLOW) { + ret = append_oa_status(stream, read_state, + DRM_I915_PERF_RECORD_OA_BUFFER_OVERFLOW); + if (ret <= 0) + return ret; + n_records += ret; + oastatus1 &= ~GEN7_OASTATUS1_OABUFFER_OVERFLOW; + } + + if (oastatus1 & GEN7_OASTATUS1_REPORT_LOST) { + ret = append_oa_status(stream, read_state, + DRM_I915_PERF_RECORD_OA_REPORT_LOST); + if (n_records && ret == -EFAULT) + ret = 0; + else if (ret < 0) + return ret; + else if (ret > 0) { + n_records += ret; + oastatus1 &= ~GEN7_OASTATUS1_REPORT_LOST; + } + } + + /* XXX: This register contains the OA tail pointer + * which may be updated asynchronously while periodic + * sampling is enabled so it's racy to + * read-modify-write like we are here (esp after the + * append/copy_to_user work above.) */ +#warning "WIP: check the correct way to clear OASTATUS1 bits" + I915_WRITE(GEN7_OASTATUS1, oastatus1); + } + + /* If there is still buffer space */ + if (ret) { + ret = gen7_append_oa_reports(stream, read_state, &head, tail); + if (n_records && ret == -EFAULT) + ret = 0; + else if (ret < 0) + return ret; + else if (ret > 0) { + n_records += ret; + + I915_WRITE(GEN7_OASTATUS2, + ((head & GEN7_OASTATUS2_HEAD_MASK) | + OA_MEM_SELECT_GGTT)); + } + } + + return n_records; +} + +static bool i915_oa_can_read(struct i915_perf_stream *stream) +{ + struct drm_i915_private *dev_priv = stream->dev_priv; + + return !dev_priv->perf.oa.ops.oa_buffer_is_empty(dev_priv); +} + +static int i915_oa_wait_unlocked(struct i915_perf_stream *stream) +{ + struct drm_i915_private *dev_priv = stream->dev_priv; + + /* Note: the oa_buffer_is_empty() condition is ok to run unlocked as it + * just performs mmio reads of the OA buffer head + tail pointers and + * it's assumed we're handling some operation that implies the stream + * can't be destroyed until completion (such as a read()) that ensures + * the device + OA buffer can't disappear + */ + return wait_event_interruptible(dev_priv->perf.oa.poll_wq, + !dev_priv->perf.oa.ops.oa_buffer_is_empty(dev_priv)); +} + +static void i915_oa_poll_wait(struct i915_perf_stream *stream, + struct file *file, + poll_table *wait) +{ + struct drm_i915_private *dev_priv = stream->dev_priv; + + poll_wait(file, &dev_priv->perf.oa.poll_wq, wait); +} + +static int i915_oa_read(struct i915_perf_stream *stream, + struct i915_perf_read_state *read_state) +{ + struct drm_i915_private *dev_priv = stream->dev_priv; + + return dev_priv->perf.oa.ops.read(stream, read_state); +} + +static void +free_oa_buffer(struct drm_i915_private *i915) +{ + mutex_lock(&i915->dev->struct_mutex); + + vunmap(i915->perf.oa.oa_buffer.addr); + i915_gem_object_ggtt_unpin(i915->perf.oa.oa_buffer.obj); + drm_gem_object_unreference(&i915->perf.oa.oa_buffer.obj->base); + + 
i915->perf.oa.oa_buffer.obj = NULL; + i915->perf.oa.oa_buffer.gtt_offset = 0; + i915->perf.oa.oa_buffer.addr = NULL; + + mutex_unlock(&i915->dev->struct_mutex); +} + +static void i915_oa_stream_destroy(struct i915_perf_stream *stream) +{ + struct drm_i915_private *dev_priv = stream->dev_priv; + + BUG_ON(stream != dev_priv->perf.oa.exclusive_stream); + + dev_priv->perf.oa.ops.disable_metric_set(dev_priv); + + free_oa_buffer(dev_priv); + + intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL); + intel_runtime_pm_put(dev_priv); + + dev_priv->perf.oa.exclusive_stream = NULL; +} + +static void *vmap_oa_buffer(struct drm_i915_gem_object *obj) +{ + int i; + void *addr = NULL; + struct sg_page_iter sg_iter; + struct page **pages; + + pages = drm_malloc_ab(obj->base.size >> PAGE_SHIFT, sizeof(*pages)); + if (pages == NULL) { + DRM_DEBUG_DRIVER("Failed to get space for pages\n"); + goto finish; + } + + i = 0; + for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, 0) { + pages[i] = sg_page_iter_page(&sg_iter); + i++; + } + + addr = vmap(pages, i, 0, PAGE_KERNEL); + if (addr == NULL) { + DRM_DEBUG_DRIVER("Failed to vmap pages\n"); + goto finish; + } + +finish: + if (pages) + drm_free_large(pages); + return addr; +} + +static void gen7_init_oa_buffer(struct drm_i915_private *dev_priv) +{ + /* Pre-DevBDW: OABUFFER must be set with counters off, + * before OASTATUS1, but after OASTATUS2 + */ + I915_WRITE(GEN7_OASTATUS2, dev_priv->perf.oa.oa_buffer.gtt_offset | + OA_MEM_SELECT_GGTT); /* head */ + I915_WRITE(GEN7_OABUFFER, dev_priv->perf.oa.oa_buffer.gtt_offset); + I915_WRITE(GEN7_OASTATUS1, dev_priv->perf.oa.oa_buffer.gtt_offset | + OABUFFER_SIZE_16M); /* tail */ +} + +static int alloc_oa_buffer(struct drm_i915_private *dev_priv) +{ + struct drm_i915_gem_object *bo; + int ret; + + BUG_ON(dev_priv->perf.oa.oa_buffer.obj); + + ret = i915_mutex_lock_interruptible(dev_priv->dev); + if (ret) + return ret; + + bo = i915_gem_alloc_object(dev_priv->dev, OA_BUFFER_SIZE); + if (bo == NULL) { + DRM_ERROR("Failed to allocate OA buffer\n"); + ret = -ENOMEM; + goto unlock; + } + dev_priv->perf.oa.oa_buffer.obj = bo; + + ret = i915_gem_object_set_cache_level(bo, I915_CACHE_LLC); + if (ret) + goto err_unref; + + /* PreHSW required 512K alignment, HSW requires 16M */ + ret = i915_gem_obj_ggtt_pin(bo, SZ_16M, 0); + if (ret) + goto err_unref; + + dev_priv->perf.oa.oa_buffer.gtt_offset = i915_gem_obj_ggtt_offset(bo); + dev_priv->perf.oa.oa_buffer.addr = vmap_oa_buffer(bo); + + dev_priv->perf.oa.ops.init_oa_buffer(dev_priv); + + DRM_DEBUG_DRIVER("OA Buffer initialized, gtt offset = 0x%x, vaddr = %p", + dev_priv->perf.oa.oa_buffer.gtt_offset, + dev_priv->perf.oa.oa_buffer.addr); + + goto unlock; + +err_unref: + drm_gem_object_unreference(&bo->base); + +unlock: + mutex_unlock(&dev_priv->dev->struct_mutex); + return ret; +} + +static void config_oa_regs(struct drm_i915_private *dev_priv, + const struct i915_oa_reg *regs, + int n_regs) +{ + int i; + + for (i = 0; i < n_regs; i++) { + const struct i915_oa_reg *reg = regs + i; + + I915_WRITE(reg->addr, reg->value); + } +} + +static int hsw_enable_metric_set(struct drm_i915_private *dev_priv) +{ + int ret = i915_oa_select_metric_set_hsw(dev_priv); + + if (ret) + return ret; + + I915_WRITE(GDT_CHICKEN_BITS, GT_NOA_ENABLE); + + /* PRM: + * + * OA unit is using “crclk” for its functionality. When trunk + * level clock gating takes place, OA clock would be gated, + * unable to count the events from non-render clock domain. 
+ * Render clock gating must be disabled when OA is enabled to + * count the events from non-render domain. Unit level clock + * gating for RCS should also be disabled. + */ + I915_WRITE(GEN7_MISCCPCTL, (I915_READ(GEN7_MISCCPCTL) & + ~GEN7_DOP_CLOCK_GATE_ENABLE)); + I915_WRITE(GEN6_UCGCTL1, (I915_READ(GEN6_UCGCTL1) | + GEN6_CSUNIT_CLOCK_GATE_DISABLE)); + + config_oa_regs(dev_priv, dev_priv->perf.oa.mux_regs, + dev_priv->perf.oa.mux_regs_len); + config_oa_regs(dev_priv, dev_priv->perf.oa.b_counter_regs, + dev_priv->perf.oa.b_counter_regs_len); + + return 0; +} + +static void hsw_disable_metric_set(struct drm_i915_private *dev_priv) +{ + I915_WRITE(GEN6_UCGCTL1, (I915_READ(GEN6_UCGCTL1) & + ~GEN6_CSUNIT_CLOCK_GATE_DISABLE)); + I915_WRITE(GEN7_MISCCPCTL, (I915_READ(GEN7_MISCCPCTL) | + GEN7_DOP_CLOCK_GATE_ENABLE)); + + I915_WRITE(GDT_CHICKEN_BITS, (I915_READ(GDT_CHICKEN_BITS) & + ~GT_NOA_ENABLE)); +} + +static void gen7_update_oacontrol_locked(struct drm_i915_private *dev_priv) +{ + assert_spin_locked(&dev_priv->perf.hook_lock); + + if (dev_priv->perf.oa.exclusive_stream->enabled) { + unsigned long ctx_id = 0; + bool pinning_ok = false; + + if (dev_priv->perf.oa.exclusive_stream->ctx && + dev_priv->perf.oa.specific_ctx_id) { + ctx_id = dev_priv->perf.oa.specific_ctx_id; + pinning_ok = true; + } + + if (dev_priv->perf.oa.exclusive_stream->ctx == NULL || + pinning_ok) { + bool periodic = dev_priv->perf.oa.periodic; + u32 period_exponent = dev_priv->perf.oa.period_exponent; + u32 report_format = dev_priv->perf.oa.oa_buffer.format; + + I915_WRITE(GEN7_OACONTROL, + (ctx_id & GEN7_OACONTROL_CTX_MASK) | + (period_exponent << + GEN7_OACONTROL_TIMER_PERIOD_SHIFT) | + (periodic ? + GEN7_OACONTROL_TIMER_ENABLE : 0) | + (report_format << + GEN7_OACONTROL_FORMAT_SHIFT) | + (ctx_id ? + GEN7_OACONTROL_PER_CTX_ENABLE : 0) | + GEN7_OACONTROL_ENABLE); + return; + } + } + + I915_WRITE(GEN7_OACONTROL, 0); +} + +static void gen7_oa_enable(struct drm_i915_private *dev_priv) +{ + unsigned long flags; + u32 oastatus1, tail; + + spin_lock_irqsave(&dev_priv->perf.hook_lock, flags); + gen7_update_oacontrol_locked(dev_priv); + spin_unlock_irqrestore(&dev_priv->perf.hook_lock, flags); + + /* Reset the head ptr so we don't forward reports from before now. 
*/ + oastatus1 = I915_READ(GEN7_OASTATUS1); + tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK; + I915_WRITE(GEN7_OASTATUS2, (tail & GEN7_OASTATUS2_HEAD_MASK) | + OA_MEM_SELECT_GGTT); +} + +static void i915_oa_stream_enable(struct i915_perf_stream *stream) +{ + struct drm_i915_private *dev_priv = stream->dev_priv; + + dev_priv->perf.oa.ops.oa_enable(dev_priv); + + if (dev_priv->perf.oa.periodic) + hrtimer_start(&dev_priv->perf.oa.poll_check_timer, + ns_to_ktime(POLL_PERIOD), + HRTIMER_MODE_REL_PINNED); +} + +static void gen7_oa_disable(struct drm_i915_private *dev_priv) +{ + I915_WRITE(GEN7_OACONTROL, 0); +} + +static void i915_oa_stream_disable(struct i915_perf_stream *stream) +{ + struct drm_i915_private *dev_priv = stream->dev_priv; + + dev_priv->perf.oa.ops.oa_disable(dev_priv); + + if (dev_priv->perf.oa.periodic) + hrtimer_cancel(&dev_priv->perf.oa.poll_check_timer); +} + +static int i915_oa_stream_init(struct i915_perf_stream *stream, + struct drm_i915_perf_open_param *param, + struct perf_open_properties *props) +{ + struct drm_i915_private *dev_priv = stream->dev_priv; + int format_size; + int ret; + + if (!(props->sample_flags & SAMPLE_OA_REPORT)) { + DRM_ERROR("Only OA report sampling supported\n"); + return -EINVAL; + } + + if (!dev_priv->perf.oa.ops.init_oa_buffer) { + DRM_ERROR("OA unit not supported\n"); + return -ENODEV; + } + + /* To avoid the complexity of having to accurately filter + * counter reports and marshal to the appropriate client + * we currently only allow exclusive access + */ + if (dev_priv->perf.oa.exclusive_stream) { + DRM_ERROR("OA unit already in use\n"); + return -EBUSY; + } + + if (!props->metrics_set) { + DRM_ERROR("OA metric set not specified\n"); + return -EINVAL; + } + + if (!props->oa_format) { + DRM_ERROR("OA report format not specified\n"); + return -EINVAL; + } + + stream->sample_size = sizeof(struct drm_i915_perf_record_header); + + format_size = dev_priv->perf.oa.oa_formats[props->oa_format].size; + + stream->sample_flags |= SAMPLE_OA_REPORT; + stream->sample_size += format_size; + + dev_priv->perf.oa.oa_buffer.format_size = format_size; + BUG_ON(dev_priv->perf.oa.oa_buffer.format_size == 0); + + dev_priv->perf.oa.oa_buffer.format = + dev_priv->perf.oa.oa_formats[props->oa_format].format; + + dev_priv->perf.oa.metrics_set = props->metrics_set; + + dev_priv->perf.oa.periodic = props->oa_periodic; + if (dev_priv->perf.oa.periodic) + dev_priv->perf.oa.period_exponent = props->oa_period_exponent; + + ret = alloc_oa_buffer(dev_priv); + if (ret) + return ret; + + dev_priv->perf.oa.exclusive_stream = stream; + + /* PRM - observability performance counters: + * + * OACONTROL, performance counter enable, note: + * + * "When this bit is set, in order to have coherent counts, + * RC6 power state and trunk clock gating must be disabled. + * This can be achieved by programming MMIO registers as + * 0xA094=0 and 0xA090[31]=1" + * + * In our case we are expecting that taking pm + FORCEWAKE + * references will effectively disable RC6. 
+ */ + intel_runtime_pm_get(dev_priv); + intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL); + + dev_priv->perf.oa.ops.enable_metric_set(dev_priv); + + stream->destroy = i915_oa_stream_destroy; + stream->enable = i915_oa_stream_enable; + stream->disable = i915_oa_stream_disable; + stream->can_read = i915_oa_can_read; + stream->wait_unlocked = i915_oa_wait_unlocked; + stream->poll_wait = i915_oa_poll_wait; + stream->read = i915_oa_read; + + return 0; +} + +static void gen7_update_hw_ctx_id_locked(struct drm_i915_private *dev_priv, + u32 ctx_id) +{ + assert_spin_locked(&dev_priv->perf.hook_lock); + + dev_priv->perf.oa.specific_ctx_id = ctx_id; + gen7_update_oacontrol_locked(dev_priv); +} + +static void i915_oa_context_pin_notify_locked(struct drm_i915_private *dev_priv, + struct intel_context *context) +{ + assert_spin_locked(&dev_priv->perf.hook_lock); + + if (i915.enable_execlists || + dev_priv->perf.oa.ops.update_hw_ctx_id_locked == NULL) + return; + + if (dev_priv->perf.oa.exclusive_stream && + dev_priv->perf.oa.exclusive_stream->ctx == context) { + struct drm_i915_gem_object *obj = + context->legacy_hw_ctx.rcs_state; + u32 ctx_id = i915_gem_obj_ggtt_offset(obj); + + dev_priv->perf.oa.ops.update_hw_ctx_id_locked(dev_priv, ctx_id); + } +} + +void i915_oa_context_pin_notify(struct drm_i915_private *dev_priv, + struct intel_context *context) +{ + unsigned long flags; + + if (!dev_priv->perf.initialized) + return; + + spin_lock_irqsave(&dev_priv->perf.hook_lock, flags); + i915_oa_context_pin_notify_locked(dev_priv, context); + spin_unlock_irqrestore(&dev_priv->perf.hook_lock, flags); +} + static ssize_t i915_perf_read_locked(struct i915_perf_stream *stream, struct file *file, char __user *buf, @@ -86,6 +764,20 @@ static ssize_t i915_perf_read(struct file *file, return ret; } +static enum hrtimer_restart poll_check_timer_cb(struct hrtimer *hrtimer) +{ + struct drm_i915_private *dev_priv = + container_of(hrtimer, typeof(*dev_priv), + perf.oa.poll_check_timer); + + if (!dev_priv->perf.oa.ops.oa_buffer_is_empty(dev_priv)) + wake_up(&dev_priv->perf.oa.poll_wq); + + hrtimer_forward_now(hrtimer, ns_to_ktime(POLL_PERIOD)); + + return HRTIMER_RESTART; +} + static unsigned int i915_perf_poll_locked(struct i915_perf_stream *stream, struct file *file, poll_table *wait) @@ -276,18 +968,18 @@ int i915_perf_open_ioctl_locked(struct drm_device *dev, goto err_ctx; } - stream->sample_flags = props->sample_flags; stream->dev_priv = dev_priv; stream->ctx = specific_ctx; - /* - * TODO: support sampling something - * - * For now this is as far as we can go. + ret = i915_oa_stream_init(stream, param, props); + if (ret) + goto err_alloc; + + /* we avoid simply assigning stream->sample_flags = props->sample_flags + * to have _stream_init check the combination of sample flags more + * thoroughly, but still this is the expected result at this point. 
*/ - DRM_ERROR("Unsupported i915 perf stream configuration\n"); - ret = -EINVAL; - goto err_alloc; + BUG_ON(stream->sample_flags != props->sample_flags); list_add(&stream->link, &dev_priv->perf.streams); @@ -372,7 +1064,53 @@ static int read_properties_unlocked(struct drm_i915_private *dev_priv, props->single_context = 1; props->ctx_handle = value; break; - + case DRM_I915_PERF_SAMPLE_OA_PROP: + props->sample_flags |= SAMPLE_OA_REPORT; + break; + case DRM_I915_PERF_OA_METRICS_SET_PROP: + if (value == 0 || + value > dev_priv->perf.oa.n_builtin_sets) { + DRM_ERROR("Unknown OA metric set ID"); + return -EINVAL; + } + props->metrics_set = value; + break; + case DRM_I915_PERF_OA_FORMAT_PROP: + if (value == 0 || value >= I915_OA_FORMAT_MAX) { + DRM_ERROR("Invalid OA report format\n"); + return -EINVAL; + } + if (!dev_priv->perf.oa.oa_formats[value].size) { + DRM_ERROR("Invalid OA report format\n"); + return -EINVAL; + } + props->oa_format = value; + break; + case DRM_I915_PERF_OA_EXPONENT_PROP: + if (value > OA_EXPONENT_MAX) + return -EINVAL; + + /* NB: The exponent represents a period as follows: + * + * 80ns * 2^(period_exponent + 1) + * + * Theoretically we can program the OA unit to sample + * every 160ns but don't allow that by default unless + * root. + * + * Referring to perf's + * kernel.perf_event_max_sample_rate for a precedent + * (100000 by default); with an OA exponent of 6 we get + * a period of 10.240 microseconds -just under 100000Hz + */ + if (value < 6 && !capable(CAP_SYS_ADMIN)) { + DRM_ERROR("Sampling period too high without root privileges\n"); + return -EACCES; + } + + props->oa_periodic = true; + props->oa_period_exponent = value; + break; case DRM_I915_PERF_PROP_MAX: BUG(); } @@ -423,8 +1161,33 @@ void i915_perf_init(struct drm_device *dev) { struct drm_i915_private *dev_priv = to_i915(dev); + if (!IS_HASWELL(dev)) + return; + + hrtimer_init(&dev_priv->perf.oa.poll_check_timer, + CLOCK_MONOTONIC, HRTIMER_MODE_REL); + dev_priv->perf.oa.poll_check_timer.function = poll_check_timer_cb; + init_waitqueue_head(&dev_priv->perf.oa.poll_wq); + INIT_LIST_HEAD(&dev_priv->perf.streams); mutex_init(&dev_priv->perf.lock); + spin_lock_init(&dev_priv->perf.hook_lock); + + dev_priv->perf.oa.ops.init_oa_buffer = gen7_init_oa_buffer; + dev_priv->perf.oa.ops.enable_metric_set = hsw_enable_metric_set; + dev_priv->perf.oa.ops.disable_metric_set = hsw_disable_metric_set; + dev_priv->perf.oa.ops.oa_enable = gen7_oa_enable; + dev_priv->perf.oa.ops.oa_disable = gen7_oa_disable; + dev_priv->perf.oa.ops.update_hw_ctx_id_locked = gen7_update_hw_ctx_id_locked; + dev_priv->perf.oa.ops.read = gen7_oa_read; + dev_priv->perf.oa.ops.oa_buffer_is_empty = gen7_oa_buffer_is_empty; + + dev_priv->perf.oa.oa_formats = hsw_oa_formats; + + dev_priv->perf.oa.n_builtin_sets = + i915_oa_n_builtin_metric_sets_hsw; + + dev_priv->perf.oa.oa_formats = hsw_oa_formats; dev_priv->perf.initialized = true; } @@ -436,7 +1199,7 @@ void i915_perf_fini(struct drm_device *dev) if (!dev_priv->perf.initialized) return; - /* Currently nothing to clean up */ + dev_priv->perf.oa.ops.init_oa_buffer = NULL; dev_priv->perf.initialized = false; } diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h index 119401c..5ab95e0 100644 --- a/drivers/gpu/drm/i915/i915_reg.h +++ b/drivers/gpu/drm/i915/i915_reg.h @@ -587,6 +587,343 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg) #define GEN7_GPGPU_DISPATCHDIMZ _MMIO(0x2508) #define GEN7_OACONTROL _MMIO(0x2360) +#define GEN7_OACONTROL_CTX_MASK 0xFFFFF000 +#define 
GEN7_OACONTROL_TIMER_PERIOD_MASK 0x3F +#define GEN7_OACONTROL_TIMER_PERIOD_SHIFT 6 +#define GEN7_OACONTROL_TIMER_ENABLE (1<<5) +#define GEN7_OACONTROL_FORMAT_A13 (0<<2) +#define GEN7_OACONTROL_FORMAT_A29 (1<<2) +#define GEN7_OACONTROL_FORMAT_A13_B8_C8 (2<<2) +#define GEN7_OACONTROL_FORMAT_A29_B8_C8 (3<<2) +#define GEN7_OACONTROL_FORMAT_B4_C8 (4<<2) +#define GEN7_OACONTROL_FORMAT_A45_B8_C8 (5<<2) +#define GEN7_OACONTROL_FORMAT_B4_C8_A16 (6<<2) +#define GEN7_OACONTROL_FORMAT_C4_B8 (7<<2) +#define GEN7_OACONTROL_FORMAT_SHIFT 2 +#define GEN7_OACONTROL_PER_CTX_ENABLE (1<<1) +#define GEN7_OACONTROL_ENABLE (1<<0) + +#define GEN8_OACTXID _MMIO(0x2364) + +#define GEN8_OACONTROL _MMIO(0x2B00) +#define GEN8_OA_REPORT_FORMAT_A12 (0<<2) +#define GEN8_OA_REPORT_FORMAT_A12_B8_C8 (2<<2) +#define GEN8_OA_REPORT_FORMAT_A36_B8_C8 (5<<2) +#define GEN8_OA_REPORT_FORMAT_C4_B8 (7<<2) +#define GEN8_OA_REPORT_FORMAT_SHIFT 2 +#define GEN8_OA_SPECIFIC_CONTEXT_ENABLE (1<<1) +#define GEN8_OA_COUNTER_ENABLE (1<<0) + +#define GEN8_OACTXCONTROL _MMIO(0x2360) +#define GEN8_OA_TIMER_PERIOD_MASK 0x3F +#define GEN8_OA_TIMER_PERIOD_SHIFT 2 +#define GEN8_OA_TIMER_ENABLE (1<<1) +#define GEN8_OA_COUNTER_RESUME (1<<0) + +#define GEN7_OABUFFER _MMIO(0x23B0) /* R/W */ +#define GEN7_OABUFFER_OVERRUN_DISABLE (1<<3) +#define GEN7_OABUFFER_EDGE_TRIGGER (1<<2) +#define GEN7_OABUFFER_STOP_RESUME_ENABLE (1<<1) +#define GEN7_OABUFFER_RESUME (1<<0) + +#define GEN8_OABUFFER _MMIO(0x2b14) + +#define GEN7_OASTATUS1 _MMIO(0x2364) +#define GEN7_OASTATUS1_TAIL_MASK 0xffffffc0 +#define GEN7_OASTATUS1_COUNTER_OVERFLOW (1<<2) +#define GEN7_OASTATUS1_OABUFFER_OVERFLOW (1<<1) +#define GEN7_OASTATUS1_REPORT_LOST (1<<0) + +#define GEN7_OASTATUS2 _MMIO(0x2368) +#define GEN7_OASTATUS2_HEAD_MASK 0xffffffc0 + +#define GEN8_OASTATUS _MMIO(0x2b08) +#define GEN8_OASTATUS_OVERRUN_STATUS (1<<3) +#define GEN8_OASTATUS_COUNTER_OVERFLOW (1<<2) +#define GEN8_OASTATUS_OABUFFER_OVERFLOW (1<<1) +#define GEN8_OASTATUS_REPORT_LOST (1<<0) + +#define GEN8_OAHEADPTR _MMIO(0x2B0C) +#define GEN8_OATAILPTR _MMIO(0x2B10) + +#define OABUFFER_SIZE_128K (0<<3) +#define OABUFFER_SIZE_256K (1<<3) +#define OABUFFER_SIZE_512K (2<<3) +#define OABUFFER_SIZE_1M (3<<3) +#define OABUFFER_SIZE_2M (4<<3) +#define OABUFFER_SIZE_4M (5<<3) +#define OABUFFER_SIZE_8M (6<<3) +#define OABUFFER_SIZE_16M (7<<3) + +#define OA_MEM_SELECT_GGTT (1<<0) + +#define EU_PERF_CNTL0 _MMIO(0xe458) + +#define GDT_CHICKEN_BITS _MMIO(0x9840) +#define GT_NOA_ENABLE 0x00000080 + +/* + * OA Boolean state + */ + +#define OAREPORTTRIG1 _MMIO(0x2740) +#define OAREPORTTRIG1_THRESHOLD_MASK 0xffff +#define OAREPORTTRIG1_EDGE_LEVEL_TRIGER_SELECT_MASK 0xffff0000 /* 0=level */ + +#define OAREPORTTRIG2 _MMIO(0x2744) +#define OAREPORTTRIG2_INVERT_A_0 (1<<0) +#define OAREPORTTRIG2_INVERT_A_1 (1<<1) +#define OAREPORTTRIG2_INVERT_A_2 (1<<2) +#define OAREPORTTRIG2_INVERT_A_3 (1<<3) +#define OAREPORTTRIG2_INVERT_A_4 (1<<4) +#define OAREPORTTRIG2_INVERT_A_5 (1<<5) +#define OAREPORTTRIG2_INVERT_A_6 (1<<6) +#define OAREPORTTRIG2_INVERT_A_7 (1<<7) +#define OAREPORTTRIG2_INVERT_A_8 (1<<8) +#define OAREPORTTRIG2_INVERT_A_9 (1<<9) +#define OAREPORTTRIG2_INVERT_A_10 (1<<10) +#define OAREPORTTRIG2_INVERT_A_11 (1<<11) +#define OAREPORTTRIG2_INVERT_A_12 (1<<12) +#define OAREPORTTRIG2_INVERT_A_13 (1<<13) +#define OAREPORTTRIG2_INVERT_A_14 (1<<14) +#define OAREPORTTRIG2_INVERT_A_15 (1<<15) +#define OAREPORTTRIG2_INVERT_B_0 (1<<16) +#define OAREPORTTRIG2_INVERT_B_1 (1<<17) +#define OAREPORTTRIG2_INVERT_B_2 (1<<18) +#define OAREPORTTRIG2_INVERT_B_3 
(1<<19) +#define OAREPORTTRIG2_INVERT_C_0 (1<<20) +#define OAREPORTTRIG2_INVERT_C_1 (1<<21) +#define OAREPORTTRIG2_INVERT_D_0 (1<<22) +#define OAREPORTTRIG2_THRESHOLD_ENABLE (1<<23) +#define OAREPORTTRIG2_REPORT_TRIGGER_ENABLE (1<<31) + +#define OAREPORTTRIG3 _MMIO(0x2748) +#define OAREPORTTRIG3_NOA_SELECT_MASK 0xf +#define OAREPORTTRIG3_NOA_SELECT_8_SHIFT 0 +#define OAREPORTTRIG3_NOA_SELECT_9_SHIFT 4 +#define OAREPORTTRIG3_NOA_SELECT_10_SHIFT 8 +#define OAREPORTTRIG3_NOA_SELECT_11_SHIFT 12 +#define OAREPORTTRIG3_NOA_SELECT_12_SHIFT 16 +#define OAREPORTTRIG3_NOA_SELECT_13_SHIFT 20 +#define OAREPORTTRIG3_NOA_SELECT_14_SHIFT 24 +#define OAREPORTTRIG3_NOA_SELECT_15_SHIFT 28 + +#define OAREPORTTRIG4 _MMIO(0x274c) +#define OAREPORTTRIG4_NOA_SELECT_MASK 0xf +#define OAREPORTTRIG4_NOA_SELECT_0_SHIFT 0 +#define OAREPORTTRIG4_NOA_SELECT_1_SHIFT 4 +#define OAREPORTTRIG4_NOA_SELECT_2_SHIFT 8 +#define OAREPORTTRIG4_NOA_SELECT_3_SHIFT 12 +#define OAREPORTTRIG4_NOA_SELECT_4_SHIFT 16 +#define OAREPORTTRIG4_NOA_SELECT_5_SHIFT 20 +#define OAREPORTTRIG4_NOA_SELECT_6_SHIFT 24 +#define OAREPORTTRIG4_NOA_SELECT_7_SHIFT 28 + +#define OAREPORTTRIG5 _MMIO(0x2750) +#define OAREPORTTRIG5_THRESHOLD_MASK 0xffff +#define OAREPORTTRIG5_EDGE_LEVEL_TRIGER_SELECT_MASK 0xffff0000 /* 0=level */ + +#define OAREPORTTRIG6 _MMIO(0x2754) +#define OAREPORTTRIG6_INVERT_A_0 (1<<0) +#define OAREPORTTRIG6_INVERT_A_1 (1<<1) +#define OAREPORTTRIG6_INVERT_A_2 (1<<2) +#define OAREPORTTRIG6_INVERT_A_3 (1<<3) +#define OAREPORTTRIG6_INVERT_A_4 (1<<4) +#define OAREPORTTRIG6_INVERT_A_5 (1<<5) +#define OAREPORTTRIG6_INVERT_A_6 (1<<6) +#define OAREPORTTRIG6_INVERT_A_7 (1<<7) +#define OAREPORTTRIG6_INVERT_A_8 (1<<8) +#define OAREPORTTRIG6_INVERT_A_9 (1<<9) +#define OAREPORTTRIG6_INVERT_A_10 (1<<10) +#define OAREPORTTRIG6_INVERT_A_11 (1<<11) +#define OAREPORTTRIG6_INVERT_A_12 (1<<12) +#define OAREPORTTRIG6_INVERT_A_13 (1<<13) +#define OAREPORTTRIG6_INVERT_A_14 (1<<14) +#define OAREPORTTRIG6_INVERT_A_15 (1<<15) +#define OAREPORTTRIG6_INVERT_B_0 (1<<16) +#define OAREPORTTRIG6_INVERT_B_1 (1<<17) +#define OAREPORTTRIG6_INVERT_B_2 (1<<18) +#define OAREPORTTRIG6_INVERT_B_3 (1<<19) +#define OAREPORTTRIG6_INVERT_C_0 (1<<20) +#define OAREPORTTRIG6_INVERT_C_1 (1<<21) +#define OAREPORTTRIG6_INVERT_D_0 (1<<22) +#define OAREPORTTRIG6_THRESHOLD_ENABLE (1<<23) +#define OAREPORTTRIG6_REPORT_TRIGGER_ENABLE (1<<31) + +#define OAREPORTTRIG7 _MMIO(0x2758) +#define OAREPORTTRIG7_NOA_SELECT_MASK 0xf +#define OAREPORTTRIG7_NOA_SELECT_8_SHIFT 0 +#define OAREPORTTRIG7_NOA_SELECT_9_SHIFT 4 +#define OAREPORTTRIG7_NOA_SELECT_10_SHIFT 8 +#define OAREPORTTRIG7_NOA_SELECT_11_SHIFT 12 +#define OAREPORTTRIG7_NOA_SELECT_12_SHIFT 16 +#define OAREPORTTRIG7_NOA_SELECT_13_SHIFT 20 +#define OAREPORTTRIG7_NOA_SELECT_14_SHIFT 24 +#define OAREPORTTRIG7_NOA_SELECT_15_SHIFT 28 + +#define OAREPORTTRIG8 _MMIO(0x275c) +#define OAREPORTTRIG8_NOA_SELECT_MASK 0xf +#define OAREPORTTRIG8_NOA_SELECT_0_SHIFT 0 +#define OAREPORTTRIG8_NOA_SELECT_1_SHIFT 4 +#define OAREPORTTRIG8_NOA_SELECT_2_SHIFT 8 +#define OAREPORTTRIG8_NOA_SELECT_3_SHIFT 12 +#define OAREPORTTRIG8_NOA_SELECT_4_SHIFT 16 +#define OAREPORTTRIG8_NOA_SELECT_5_SHIFT 20 +#define OAREPORTTRIG8_NOA_SELECT_6_SHIFT 24 +#define OAREPORTTRIG8_NOA_SELECT_7_SHIFT 28 + +#define OASTARTTRIG1 _MMIO(0x2710) +#define OASTARTTRIG1_THRESHOLD_COUNT_MASK_MBZ 0xffff0000 +#define OASTARTTRIG1_THRESHOLD_MASK 0xffff + +#define OASTARTTRIG2 _MMIO(0x2714) +#define OASTARTTRIG2_INVERT_A_0 (1<<0) +#define OASTARTTRIG2_INVERT_A_1 (1<<1) +#define 
OASTARTTRIG2_INVERT_A_2 (1<<2) +#define OASTARTTRIG2_INVERT_A_3 (1<<3) +#define OASTARTTRIG2_INVERT_A_4 (1<<4) +#define OASTARTTRIG2_INVERT_A_5 (1<<5) +#define OASTARTTRIG2_INVERT_A_6 (1<<6) +#define OASTARTTRIG2_INVERT_A_7 (1<<7) +#define OASTARTTRIG2_INVERT_A_8 (1<<8) +#define OASTARTTRIG2_INVERT_A_9 (1<<9) +#define OASTARTTRIG2_INVERT_A_10 (1<<10) +#define OASTARTTRIG2_INVERT_A_11 (1<<11) +#define OASTARTTRIG2_INVERT_A_12 (1<<12) +#define OASTARTTRIG2_INVERT_A_13 (1<<13) +#define OASTARTTRIG2_INVERT_A_14 (1<<14) +#define OASTARTTRIG2_INVERT_A_15 (1<<15) +#define OASTARTTRIG2_INVERT_B_0 (1<<16) +#define OASTARTTRIG2_INVERT_B_1 (1<<17) +#define OASTARTTRIG2_INVERT_B_2 (1<<18) +#define OASTARTTRIG2_INVERT_B_3 (1<<19) +#define OASTARTTRIG2_INVERT_C_0 (1<<20) +#define OASTARTTRIG2_INVERT_C_1 (1<<21) +#define OASTARTTRIG2_INVERT_D_0 (1<<22) +#define OASTARTTRIG2_THRESHOLD_ENABLE (1<<23) +#define OASTARTTRIG2_START_TRIG_FLAG_MBZ (1<<24) +#define OASTARTTRIG2_EVENT_SELECT_0 (1<<28) +#define OASTARTTRIG2_EVENT_SELECT_1 (1<<29) +#define OASTARTTRIG2_EVENT_SELECT_2 (1<<30) +#define OASTARTTRIG2_EVENT_SELECT_3 (1<<31) + +#define OASTARTTRIG3 _MMIO(0x2718) +#define OASTARTTRIG3_NOA_SELECT_MASK 0xf +#define OASTARTTRIG3_NOA_SELECT_8_SHIFT 0 +#define OASTARTTRIG3_NOA_SELECT_9_SHIFT 4 +#define OASTARTTRIG3_NOA_SELECT_10_SHIFT 8 +#define OASTARTTRIG3_NOA_SELECT_11_SHIFT 12 +#define OASTARTTRIG3_NOA_SELECT_12_SHIFT 16 +#define OASTARTTRIG3_NOA_SELECT_13_SHIFT 20 +#define OASTARTTRIG3_NOA_SELECT_14_SHIFT 24 +#define OASTARTTRIG3_NOA_SELECT_15_SHIFT 28 + +#define OASTARTTRIG4 _MMIO(0x271c) +#define OASTARTTRIG4_NOA_SELECT_MASK 0xf +#define OASTARTTRIG4_NOA_SELECT_0_SHIFT 0 +#define OASTARTTRIG4_NOA_SELECT_1_SHIFT 4 +#define OASTARTTRIG4_NOA_SELECT_2_SHIFT 8 +#define OASTARTTRIG4_NOA_SELECT_3_SHIFT 12 +#define OASTARTTRIG4_NOA_SELECT_4_SHIFT 16 +#define OASTARTTRIG4_NOA_SELECT_5_SHIFT 20 +#define OASTARTTRIG4_NOA_SELECT_6_SHIFT 24 +#define OASTARTTRIG4_NOA_SELECT_7_SHIFT 28 + +#define OASTARTTRIG5 _MMIO(0x2720) +#define OASTARTTRIG5_THRESHOLD_COUNT_MASK_MBZ 0xffff0000 +#define OASTARTTRIG5_THRESHOLD_MASK 0xffff + +#define OASTARTTRIG6 _MMIO(0x2724) +#define OASTARTTRIG6_INVERT_A_0 (1<<0) +#define OASTARTTRIG6_INVERT_A_1 (1<<1) +#define OASTARTTRIG6_INVERT_A_2 (1<<2) +#define OASTARTTRIG6_INVERT_A_3 (1<<3) +#define OASTARTTRIG6_INVERT_A_4 (1<<4) +#define OASTARTTRIG6_INVERT_A_5 (1<<5) +#define OASTARTTRIG6_INVERT_A_6 (1<<6) +#define OASTARTTRIG6_INVERT_A_7 (1<<7) +#define OASTARTTRIG6_INVERT_A_8 (1<<8) +#define OASTARTTRIG6_INVERT_A_9 (1<<9) +#define OASTARTTRIG6_INVERT_A_10 (1<<10) +#define OASTARTTRIG6_INVERT_A_11 (1<<11) +#define OASTARTTRIG6_INVERT_A_12 (1<<12) +#define OASTARTTRIG6_INVERT_A_13 (1<<13) +#define OASTARTTRIG6_INVERT_A_14 (1<<14) +#define OASTARTTRIG6_INVERT_A_15 (1<<15) +#define OASTARTTRIG6_INVERT_B_0 (1<<16) +#define OASTARTTRIG6_INVERT_B_1 (1<<17) +#define OASTARTTRIG6_INVERT_B_2 (1<<18) +#define OASTARTTRIG6_INVERT_B_3 (1<<19) +#define OASTARTTRIG6_INVERT_C_0 (1<<20) +#define OASTARTTRIG6_INVERT_C_1 (1<<21) +#define OASTARTTRIG6_INVERT_D_0 (1<<22) +#define OASTARTTRIG6_THRESHOLD_ENABLE (1<<23) +#define OASTARTTRIG6_START_TRIG_FLAG_MBZ (1<<24) +#define OASTARTTRIG6_EVENT_SELECT_4 (1<<28) +#define OASTARTTRIG6_EVENT_SELECT_5 (1<<29) +#define OASTARTTRIG6_EVENT_SELECT_6 (1<<30) +#define OASTARTTRIG6_EVENT_SELECT_7 (1<<31) + +#define OASTARTTRIG7 _MMIO(0x2728) +#define OASTARTTRIG7_NOA_SELECT_MASK 0xf +#define OASTARTTRIG7_NOA_SELECT_8_SHIFT 0 +#define OASTARTTRIG7_NOA_SELECT_9_SHIFT 4 
+#define OASTARTTRIG7_NOA_SELECT_10_SHIFT 8 +#define OASTARTTRIG7_NOA_SELECT_11_SHIFT 12 +#define OASTARTTRIG7_NOA_SELECT_12_SHIFT 16 +#define OASTARTTRIG7_NOA_SELECT_13_SHIFT 20 +#define OASTARTTRIG7_NOA_SELECT_14_SHIFT 24 +#define OASTARTTRIG7_NOA_SELECT_15_SHIFT 28 + +#define OASTARTTRIG8 _MMIO(0x272c) +#define OASTARTTRIG8_NOA_SELECT_MASK 0xf +#define OASTARTTRIG8_NOA_SELECT_0_SHIFT 0 +#define OASTARTTRIG8_NOA_SELECT_1_SHIFT 4 +#define OASTARTTRIG8_NOA_SELECT_2_SHIFT 8 +#define OASTARTTRIG8_NOA_SELECT_3_SHIFT 12 +#define OASTARTTRIG8_NOA_SELECT_4_SHIFT 16 +#define OASTARTTRIG8_NOA_SELECT_5_SHIFT 20 +#define OASTARTTRIG8_NOA_SELECT_6_SHIFT 24 +#define OASTARTTRIG8_NOA_SELECT_7_SHIFT 28 + +/* CECX_0 */ +#define OACEC_COMPARE_LESS_OR_EQUAL 6 +#define OACEC_COMPARE_NOT_EQUAL 5 +#define OACEC_COMPARE_LESS_THAN 4 +#define OACEC_COMPARE_GREATER_OR_EQUAL 3 +#define OACEC_COMPARE_EQUAL 2 +#define OACEC_COMPARE_GREATER_THAN 1 +#define OACEC_COMPARE_ANY_EQUAL 0 + +#define OACEC_COMPARE_VALUE_MASK 0xffff +#define OACEC_COMPARE_VALUE_SHIFT 3 + +#define OACEC_SELECT_NOA (0<<19) +#define OACEC_SELECT_PREV (1<<19) +#define OACEC_SELECT_BOOLEAN (2<<19) + +/* CECX_1 */ +#define OACEC_MASK_MASK 0xffff +#define OACEC_CONSIDERATIONS_MASK 0xffff +#define OACEC_CONSIDERATIONS_SHIFT 16 + +#define OACEC0_0 _MMIO(0x2770) +#define OACEC0_1 _MMIO(0x2774) +#define OACEC1_0 _MMIO(0x2778) +#define OACEC1_1 _MMIO(0x277c) +#define OACEC2_0 _MMIO(0x2780) +#define OACEC2_1 _MMIO(0x2784) +#define OACEC3_0 _MMIO(0x2788) +#define OACEC3_1 _MMIO(0x278c) +#define OACEC4_0 _MMIO(0x2790) +#define OACEC4_1 _MMIO(0x2794) +#define OACEC5_0 _MMIO(0x2798) +#define OACEC5_1 _MMIO(0x279c) +#define OACEC6_0 _MMIO(0x27a0) +#define OACEC6_1 _MMIO(0x27a4) +#define OACEC7_0 _MMIO(0x27a8) +#define OACEC7_1 _MMIO(0x27ac) + #define _GEN7_PIPEA_DE_LOAD_SL 0x70068 #define _GEN7_PIPEB_DE_LOAD_SL 0x71068 @@ -6849,6 +7186,7 @@ enum skl_disp_power_wells { # define GEN6_RCCUNIT_CLOCK_GATE_DISABLE (1 << 11) #define GEN6_UCGCTL3 _MMIO(0x9408) +# define GEN6_OACSUNIT_CLOCK_GATE_DISABLE (1 << 20) #define GEN7_UCGCTL4 _MMIO(0x940c) #define GEN7_L3BANK2X_CLOCK_GATE_DISABLE (1<<25) diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h index 68ca26e..c8fc556 100644 --- a/include/uapi/drm/i915_drm.h +++ b/include/uapi/drm/i915_drm.h @@ -1172,6 +1172,18 @@ struct drm_i915_gem_context_param { __u64 value; }; +enum drm_i915_oa_format { + I915_OA_FORMAT_A13 = 1, + I915_OA_FORMAT_A29, + I915_OA_FORMAT_A13_B8_C8, + I915_OA_FORMAT_B4_C8, + I915_OA_FORMAT_A45_B8_C8, + I915_OA_FORMAT_B4_C8_A16, + I915_OA_FORMAT_C4_B8, + + I915_OA_FORMAT_MAX /* non-ABI */ +}; + #define I915_PERF_FLAG_FD_CLOEXEC (1<<0) #define I915_PERF_FLAG_FD_NONBLOCK (1<<1) #define I915_PERF_FLAG_DISABLED (1<<2) @@ -1184,6 +1196,32 @@ enum drm_i915_perf_property_id { */ DRM_I915_PERF_CTX_HANDLE_PROP = 1, + /** + * A value of 1 requests the inclusion of raw OA unit reports as + * part of stream samples. + */ + DRM_I915_PERF_SAMPLE_OA_PROP, + + /** + * The value specifies which set of OA unit metrics should be + * be configured, defining the contents of any OA unit reports. + */ + DRM_I915_PERF_OA_METRICS_SET_PROP, + + /** + * The value specifies the size and layout of OA unit reports. 
+ */ + DRM_I915_PERF_OA_FORMAT_PROP, + + /** + * Specifying this property implicitly requests periodic OA unit + * sampling and (at least on Haswell) the sampling frequency is derived + * from this exponent as follows: + * + * 80ns * 2^(period_exponent + 1) + */ + DRM_I915_PERF_OA_EXPONENT_PROP, + DRM_I915_PERF_PROP_MAX /* non-ABI */ }; @@ -1225,17 +1263,31 @@ enum drm_i915_perf_record_type { * every sample. * * The order of these sample properties given by userspace has no - * affect on the ordering of data within a sample. The order will be + * effect on the ordering of data within a sample. The order is * documented here. * * struct { * struct drm_i915_perf_record_header header; * - * TODO: itemize extensible sample data here + * { u32 oa_report[]; } && DRM_I915_PERF_SAMPLE_OA_PROP * }; */ DRM_I915_PERF_RECORD_SAMPLE = 1, + /* + * Indicates that one or more OA reports were not written + * by the hardware. + */ + DRM_I915_PERF_RECORD_OA_REPORT_LOST = 2, + + /* + * Indicates that the internal circular buffer that Gen + * graphics writes OA reports into has filled, which may + * either mean that old reports could be overwritten or + * subsequent reports lost until the buffer is cleared. + */ + DRM_I915_PERF_RECORD_OA_BUFFER_OVERFLOW = 3, + DRM_I915_PERF_RECORD_MAX /* non-ABI */ };
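
For anyone wanting to poke at this from userspace, here's a rough,
untested sketch of opening a periodic OA stream using the properties
added above. Note that DRM_IOCTL_I915_PERF_OPEN, struct
drm_i915_perf_open_param and its field names (flags, num_properties,
properties_ptr) come from the earlier patches in this series and
aren't shown here, so treat those details (and the assumption that
num_properties counts key/value pairs) as illustrative only; the
property IDs, report format and exponent semantics are as defined in
this patch, and the uapi header is assumed to be on the include path.

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <drm/i915_drm.h>

  /* Sketch only: open_param layout and ioctl number are assumptions
   * based on the earlier patches in this series.
   */
  static int open_oa_stream(int drm_fd)
  {
          uint64_t properties[] = {
                  /* Include a raw OA report in each sample record */
                  DRM_I915_PERF_SAMPLE_OA_PROP, 1,

                  /* Metric set 1 = render basic (the only HSW set so far) */
                  DRM_I915_PERF_OA_METRICS_SET_PROP, 1,

                  /* 256 byte reports carrying all 45 A counters */
                  DRM_I915_PERF_OA_FORMAT_PROP, I915_OA_FORMAT_A45_B8_C8,

                  /* 80ns * 2^(16 + 1) ~= 10.5ms between periodic reports;
                   * exponents below 6 (10.24us) need CAP_SYS_ADMIN */
                  DRM_I915_PERF_OA_EXPONENT_PROP, 16,
          };
          struct drm_i915_perf_open_param param = {
                  .flags = I915_PERF_FLAG_FD_CLOEXEC,
                  /* assuming num_properties counts key/value pairs */
                  .num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
                  .properties_ptr = (uintptr_t)properties,
          };

          return ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
  }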
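
Reading the stream then just means walking drm_i915_perf_record_header
records out of read(): with this patch that's SAMPLE records (whose
payload is one raw OA report when DRM_I915_PERF_SAMPLE_OA_PROP was
requested), plus OA_REPORT_LOST and OA_BUFFER_OVERFLOW status records.
A minimal decode loop for the fd returned above might look like the
following, assuming the drm_i915_perf_record_header definition (type,
pad, size) introduced earlier in the series:

  #include <stdint.h>
  #include <stdio.h>
  #include <unistd.h>
  #include <drm/i915_drm.h>

  static void read_oa_records(int stream_fd)
  {
          uint8_t buf[64 * 1024];
          ssize_t len = read(stream_fd, buf, sizeof(buf));
          size_t offset = 0;

          if (len <= 0)
                  return;

          while (offset + sizeof(struct drm_i915_perf_record_header) <= (size_t)len) {
                  const struct drm_i915_perf_record_header *header =
                          (const void *)(buf + offset);

                  if (header->size < sizeof(*header) ||
                      offset + header->size > (size_t)len)
                          break;

                  switch (header->type) {
                  case DRM_I915_PERF_RECORD_SAMPLE:
                          /* payload = one raw OA report of format_size bytes,
                           * laid out according to the requested OA format */
                          printf("sample: %u byte record\n", (unsigned)header->size);
                          break;
                  case DRM_I915_PERF_RECORD_OA_REPORT_LOST:
                          fprintf(stderr, "OA report(s) lost\n");
                          break;
                  case DRM_I915_PERF_RECORD_OA_BUFFER_OVERFLOW:
                          fprintf(stderr, "OA buffer overflowed\n");
                          break;
                  }

                  offset += header->size;
          }
  }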
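
Finally, since the head/tail handling in gen7_append_oa_reports()
leans on it: a small illustration of why the OA_TAKEN() macro and the
(OA_BUFFER_SIZE - 1) mask only work because the buffer size is a power
of two, given that the hardware tail pointer is free running and not
wrapped back to the buffer size. The macro is copied from i915_perf.c
above; the head/tail values are made up for the example.

  #include <stdint.h>

  /* Toy reproduction of the macros from i915_perf.c, 16MB buffer */
  #define OA_BUFFER_SIZE (16 << 20)
  #define OA_TAKEN(tail, head) ((tail - head) & (OA_BUFFER_SIZE - 1))

  /* Example: head sits 64 bytes before the end of the buffer and the
   * GPU has since written 256 more bytes of reports, so tail has run
   * past the buffer size without being wrapped by the hardware.
   */
  uint32_t head = OA_BUFFER_SIZE - 64;
  uint32_t tail = OA_BUFFER_SIZE + 192;

  /* OA_TAKEN(tail, head) == 256: the unsigned subtraction plus the
   * power-of-two mask still gives the amount of unread data across the
   * wrap, and (head & (OA_BUFFER_SIZE - 1)) indexes back to offset 0
   * once head is advanced past the end while consuming reports.
   */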