From patchwork Wed Nov 8 23:59:46 2017
X-Patchwork-Submitter: oscar.mateo@intel.com
X-Patchwork-Id: 10049725
From: Oscar Mateo
To: intel-gfx@lists.freedesktop.org
Date: Wed, 8 Nov 2017 15:59:46 -0800
Message-Id: <1510185589-9100-4-git-send-email-oscar.mateo@intel.com>
In-Reply-To: <1510185589-9100-1-git-send-email-oscar.mateo@intel.com>
References: <1510185589-9100-1-git-send-email-oscar.mateo@intel.com>
Subject: [Intel-gfx] [RFC PATCH 3/6] drm/i915: Split out functions for different kinds of workarounds

There are different kinds of workarounds (those that modify registers that
live in the context image, those that modify global registers, those that
whitelist registers, etc.) and they have different requirements as to where
and how they are applied. Splitting them apart should also make it easier to
decide where a new workaround belongs.
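For reference, the split boils down to three groups of entry points (these are
the prototypes this patch adds to intel_workarounds.h; the comments summarise
the call sites touched by the diff below):

/* Context workarounds: registers that live in the context image. The table is
 * built once from i915_gem_contexts_init() and then emitted into every new
 * render context (gen8_init_rcs_context / intel_rcs_ctx_init). */
int intel_ctx_workarounds_init(struct drm_i915_private *dev_priv);
int intel_ctx_workarounds_emit(struct drm_i915_gem_request *req);

/* GT workarounds: global MMIO registers, written directly from
 * i915_gem_init_hw(). */
void intel_gt_workarounds_apply(struct drm_i915_private *dev_priv);

/* Whitelist workarounds: RING_FORCE_TO_NONPRIV slots that let non-privileged
 * batches touch selected registers; programmed per engine from the engine
 * init hooks (gen9_init_render_ring). */
int intel_whitelist_workarounds_apply(struct intel_engine_cs *engine);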
v2:
  - Add multiple MISSING_CASE
  - Rebased

v3:
  - Rename mmio_workarounds to gt_workarounds (Chris, Mika)
  - Create empty placeholders for BDW and CHV GT WAs
  - Rebased

Signed-off-by: Oscar Mateo
Cc: Chris Wilson
Cc: Mika Kuoppala
Cc: Ville Syrjälä
---
 drivers/gpu/drm/i915/i915_gem.c          |   3 +
 drivers/gpu/drm/i915/i915_gem_context.c  |   5 +
 drivers/gpu/drm/i915/intel_lrc.c         |  10 +-
 drivers/gpu/drm/i915/intel_ringbuffer.c  |   4 +-
 drivers/gpu/drm/i915/intel_workarounds.c | 714 +++++++++++++++++++------------
 drivers/gpu/drm/i915/intel_workarounds.h |   8 +-
 6 files changed, 458 insertions(+), 286 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 889ae88..08407f4 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -35,6 +35,7 @@
 #include "intel_drv.h"
 #include "intel_frontbuffer.h"
 #include "intel_mocs.h"
+#include "intel_workarounds.h"
 #include "i915_gemfs.h"
 #include
 #include
@@ -4915,6 +4916,8 @@ int i915_gem_init_hw(struct drm_i915_private *dev_priv)
 		}
 	}
 
+	intel_gt_workarounds_apply(dev_priv);
+
 	i915_gem_init_swizzling(dev_priv);
 
 	/*
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 10affb3..8548e571 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -90,6 +90,7 @@
 #include
 #include "i915_drv.h"
 #include "i915_trace.h"
+#include "intel_workarounds.h"
 
 #define ALL_L3_SLICES(dev) (1 << NUM_L3_SLICES(dev)) - 1
 
@@ -456,6 +457,10 @@ int i915_gem_contexts_init(struct drm_i915_private *dev_priv)
 
 	GEM_BUG_ON(dev_priv->kernel_context);
 
+	err = intel_ctx_workarounds_init(dev_priv);
+	if (err)
+		goto err;
+
 	INIT_LIST_HEAD(&dev_priv->contexts.list);
 	INIT_WORK(&dev_priv->contexts.free_work, contexts_free_worker);
 	init_llist_head(&dev_priv->contexts.free_list);
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 911df0c..f0b4d2f 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1503,7 +1503,7 @@ static int gen8_init_render_ring(struct intel_engine_cs *engine)
 
 	I915_WRITE(INSTPM, _MASKED_BIT_ENABLE(INSTPM_FORCE_ORDERING));
 
-	return init_workarounds_ring(engine);
+	return 0;
 }
 
 static int gen9_init_render_ring(struct intel_engine_cs *engine)
@@ -1514,7 +1514,11 @@
 	if (ret)
 		return ret;
 
-	return init_workarounds_ring(engine);
+	ret = intel_whitelist_workarounds_apply(engine);
+	if (ret)
+		return ret;
+
+	return 0;
 }
 
 static void reset_common_ring(struct intel_engine_cs *engine,
@@ -1830,7 +1834,7 @@ static int gen8_init_rcs_context(struct drm_i915_gem_request *req)
 {
 	int ret;
 
-	ret = intel_ring_workarounds_emit(req);
+	ret = intel_ctx_workarounds_emit(req);
 	if (ret)
 		return ret;
 
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 1c721b2..b053fed 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -649,7 +649,7 @@ static int intel_rcs_ctx_init(struct drm_i915_gem_request *req)
 {
 	int ret;
 
-	ret = intel_ring_workarounds_emit(req);
+	ret = intel_ctx_workarounds_emit(req);
 	if (ret != 0)
 		return ret;
 
@@ -708,7 +708,7 @@ static int init_render_ring(struct intel_engine_cs *engine)
 	if (INTEL_INFO(dev_priv)->gen >= 6)
 		I915_WRITE_IMR(engine, ~engine->irq_keep_mask);
 
-	return init_workarounds_ring(engine);
+	return 0;
 }
 
 static void render_ring_cleanup(struct intel_engine_cs *engine)
diff --git
a/drivers/gpu/drm/i915/intel_workarounds.c b/drivers/gpu/drm/i915/intel_workarounds.c index 5f597d1..0a8f265 100644 --- a/drivers/gpu/drm/i915/intel_workarounds.c +++ b/drivers/gpu/drm/i915/intel_workarounds.c @@ -58,27 +58,8 @@ static int wa_add(struct drm_i915_private *dev_priv, #define WA_SET_FIELD_MASKED(addr, mask, value) \ WA_REG(addr, mask, _MASKED_FIELD(mask, value)) -static int wa_ring_whitelist_reg(struct intel_engine_cs *engine, - i915_reg_t reg) -{ - struct drm_i915_private *dev_priv = engine->i915; - struct i915_workarounds *wa = &dev_priv->workarounds; - const uint32_t index = wa->hw_whitelist_count[engine->id]; - - if (WARN_ON(index >= RING_MAX_NONPRIV_SLOTS)) - return -EINVAL; - - I915_WRITE(RING_FORCE_TO_NONPRIV(engine->mmio_base, index), - i915_mmio_reg_offset(reg)); - wa->hw_whitelist_count[engine->id]++; - - return 0; -} - -static int gen8_init_workarounds(struct intel_engine_cs *engine) +static int gen8_ctx_workarounds_init(struct drm_i915_private *dev_priv) { - struct drm_i915_private *dev_priv = engine->i915; - WA_SET_BIT_MASKED(INSTPM, INSTPM_FORCE_ORDERING); /* WaDisableAsyncFlipPerfMode:bdw,chv */ @@ -126,12 +107,11 @@ static int gen8_init_workarounds(struct intel_engine_cs *engine) return 0; } -static int bdw_init_workarounds(struct intel_engine_cs *engine) +static int bdw_ctx_workarounds_init(struct drm_i915_private *dev_priv) { - struct drm_i915_private *dev_priv = engine->i915; int ret; - ret = gen8_init_workarounds(engine); + ret = gen8_ctx_workarounds_init(dev_priv); if (ret) return ret; @@ -158,12 +138,11 @@ static int bdw_init_workarounds(struct intel_engine_cs *engine) return 0; } -static int chv_init_workarounds(struct intel_engine_cs *engine) +static int chv_ctx_workarounds_init(struct drm_i915_private *dev_priv) { - struct drm_i915_private *dev_priv = engine->i915; int ret; - ret = gen8_init_workarounds(engine); + ret = gen8_ctx_workarounds_init(dev_priv); if (ret) return ret; @@ -176,23 +155,8 @@ static int chv_init_workarounds(struct intel_engine_cs *engine) return 0; } -static int gen9_init_workarounds(struct intel_engine_cs *engine) +static int gen9_ctx_workarounds_init(struct drm_i915_private *dev_priv) { - struct drm_i915_private *dev_priv = engine->i915; - int ret; - - /* WaConextSwitchWithConcurrentTLBInvalidate:skl,bxt,kbl,glk,cfl */ - I915_WRITE(GEN9_CSFE_CHICKEN1_RCS, _MASKED_BIT_ENABLE(GEN9_PREEMPT_GPGPU_SYNC_SWITCH_DISABLE)); - - /* WaEnableLbsSlaRetryTimerDecrement:skl,bxt,kbl,glk,cfl */ - I915_WRITE(BDW_SCRATCH1, I915_READ(BDW_SCRATCH1) | - GEN9_LBS_SLA_RETRY_TIMER_DECREMENT_ENABLE); - - /* WaDisableKillLogic:bxt,skl,kbl */ - if (!IS_COFFEELAKE(dev_priv)) - I915_WRITE(GAM_ECOCHK, I915_READ(GAM_ECOCHK) | - ECOCHK_DIS_TLB); - if (HAS_LLC(dev_priv)) { /* WaCompressedResourceSamplerPbeMediaNewHashMode:skl,kbl * @@ -203,11 +167,6 @@ static int gen9_init_workarounds(struct intel_engine_cs *engine) GEN9_PBE_COMPRESSED_HASH_SELECTION); WA_SET_BIT_MASKED(GEN9_HALF_SLICE_CHICKEN7, GEN9_SAMPLER_HASH_COMPRESSED_READ_ADDR); - - I915_WRITE(MMCD_MISC_CTRL, - I915_READ(MMCD_MISC_CTRL) | - MMCD_PCLA | - MMCD_HOTSPOT_EN); } /* WaClearFlowControlGpgpuContextSave:skl,bxt,kbl,glk,cfl */ @@ -279,10 +238,6 @@ static int gen9_init_workarounds(struct intel_engine_cs *engine) WA_SET_BIT_MASKED(HDC_CHICKEN0, HDC_FORCE_NON_COHERENT); - /* WaDisableHDCInvalidation:skl,bxt,kbl,cfl */ - I915_WRITE(GAM_ECOCHK, I915_READ(GAM_ECOCHK) | - BDW_DISABLE_HDC_INVALIDATION); - /* WaDisableSamplerPowerBypassForSOPingPong:skl,bxt,kbl,cfl */ if (IS_SKYLAKE(dev_priv) || 
IS_KABYLAKE(dev_priv) || @@ -294,10 +249,6 @@ static int gen9_init_workarounds(struct intel_engine_cs *engine) /* WaDisableSTUnitPowerOptimization:skl,bxt,kbl,glk,cfl */ WA_SET_BIT_MASKED(HALF_SLICE_CHICKEN2, GEN8_ST_PO_DISABLE); - /* WaOCLCoherentLineFlush:skl,bxt,kbl,cfl */ - I915_WRITE(GEN8_L3SQCREG4, (I915_READ(GEN8_L3SQCREG4) | - GEN8_LQSC_FLUSH_COHERENT_LINES)); - /* * Supporting preemption with fine-granularity requires changes in the * batch buffer programming. Since we can't break old userspace, we @@ -316,29 +267,11 @@ static int gen9_init_workarounds(struct intel_engine_cs *engine) WA_SET_FIELD_MASKED(GEN8_CS_CHICKEN1, GEN9_PREEMPT_GPGPU_LEVEL_MASK, GEN9_PREEMPT_GPGPU_COMMAND_LEVEL); - /* WaVFEStateAfterPipeControlwithMediaStateClear:skl,bxt,glk,cfl */ - ret = wa_ring_whitelist_reg(engine, GEN9_CTX_PREEMPT_REG); - if (ret) - return ret; - - /* WaEnablePreemptionGranularityControlByUMD:skl,bxt,kbl,cfl,[cnl] */ - I915_WRITE(GEN7_FF_SLICE_CS_CHICKEN1, - _MASKED_BIT_ENABLE(GEN9_FFSC_PERCTX_PREEMPT_CTRL)); - ret = wa_ring_whitelist_reg(engine, GEN8_CS_CHICKEN1); - if (ret) - return ret; - - /* WaAllowUMDToModifyHDCChicken1:skl,bxt,kbl,glk,cfl */ - ret = wa_ring_whitelist_reg(engine, GEN8_HDC_CHICKEN1); - if (ret) - return ret; - return 0; } -static int skl_tune_iz_hashing(struct intel_engine_cs *engine) +static int skl_tune_iz_hashing(struct drm_i915_private *dev_priv) { - struct drm_i915_private *dev_priv = engine->i915; u8 vals[3] = { 0, 0, 0 }; unsigned int i; @@ -377,67 +310,29 @@ static int skl_tune_iz_hashing(struct intel_engine_cs *engine) return 0; } -static int skl_init_workarounds(struct intel_engine_cs *engine) +static int skl_ctx_workarounds_init(struct drm_i915_private *dev_priv) { - struct drm_i915_private *dev_priv = engine->i915; int ret; - ret = gen9_init_workarounds(engine); - if (ret) - return ret; - - /* WaEnableGapsTsvCreditFix:skl */ - I915_WRITE(GEN8_GARBCNTL, (I915_READ(GEN8_GARBCNTL) | - GEN9_GAPS_TSV_CREDIT_DISABLE)); - - /* WaDisableGafsUnitClkGating:skl */ - I915_WRITE(GEN7_UCGCTL4, (I915_READ(GEN7_UCGCTL4) | - GEN8_EU_GAUNIT_CLOCK_GATE_DISABLE)); - - /* WaInPlaceDecompressionHang:skl */ - if (IS_SKL_REVID(dev_priv, SKL_REVID_H0, REVID_FOREVER)) - I915_WRITE(GEN9_GAMT_ECO_REG_RW_IA, - (I915_READ(GEN9_GAMT_ECO_REG_RW_IA) | - GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS)); - - /* WaDisableLSQCROPERFforOCL:skl */ - ret = wa_ring_whitelist_reg(engine, GEN8_L3SQCREG4); + ret = gen9_ctx_workarounds_init(dev_priv); if (ret) return ret; - return skl_tune_iz_hashing(engine); + return skl_tune_iz_hashing(dev_priv); } -static int bxt_init_workarounds(struct intel_engine_cs *engine) +static int bxt_ctx_workarounds_init(struct drm_i915_private *dev_priv) { - struct drm_i915_private *dev_priv = engine->i915; int ret; - ret = gen9_init_workarounds(engine); + ret = gen9_ctx_workarounds_init(dev_priv); if (ret) return ret; - /* WaStoreMultiplePTEenable:bxt */ - /* This is a requirement according to Hardware specification */ - if (IS_BXT_REVID(dev_priv, 0, BXT_REVID_A1)) - I915_WRITE(TILECTL, I915_READ(TILECTL) | TILECTL_TLBPF); - - /* WaSetClckGatingDisableMedia:bxt */ - if (IS_BXT_REVID(dev_priv, 0, BXT_REVID_A1)) { - I915_WRITE(GEN7_MISCCPCTL, (I915_READ(GEN7_MISCCPCTL) & - ~GEN8_DOP_CLOCK_GATE_MEDIA_ENABLE)); - } - /* WaDisableThreadStallDopClockGating:bxt */ WA_SET_BIT_MASKED(GEN8_ROW_CHICKEN, STALL_DOP_GATING_DISABLE); - /* WaDisablePooledEuLoadBalancingFix:bxt */ - if (IS_BXT_REVID(dev_priv, BXT_REVID_B0, REVID_FOREVER)) { - I915_WRITE(FF_SLICE_CS_CHICKEN2, - 
_MASKED_BIT_ENABLE(GEN9_POOLED_EU_LOAD_BALANCING_FIX_DISABLE)); - } - /* WaDisableSbeCacheDispatchPortSharing:bxt */ if (IS_BXT_REVID(dev_priv, 0, BXT_REVID_B0)) { WA_SET_BIT_MASKED( @@ -445,117 +340,22 @@ static int bxt_init_workarounds(struct intel_engine_cs *engine) GEN7_SBE_SS_CACHE_DISPATCH_PORT_SHARING_DISABLE); } - /* WaDisableObjectLevelPreemptionForTrifanOrPolygon:bxt */ - /* WaDisableObjectLevelPreemptionForInstancedDraw:bxt */ - /* WaDisableObjectLevelPreemtionForInstanceId:bxt */ - /* WaDisableLSQCROPERFforOCL:bxt */ - if (IS_BXT_REVID(dev_priv, 0, BXT_REVID_A1)) { - ret = wa_ring_whitelist_reg(engine, GEN9_CS_DEBUG_MODE1); - if (ret) - return ret; - - ret = wa_ring_whitelist_reg(engine, GEN8_L3SQCREG4); - if (ret) - return ret; - } - - /* WaProgramL3SqcReg1DefaultForPerf:bxt */ - if (IS_BXT_REVID(dev_priv, BXT_REVID_B0, REVID_FOREVER)) { - u32 val = I915_READ(GEN8_L3SQCREG1); - val &= ~L3_PRIO_CREDITS_MASK; - val |= L3_GENERAL_PRIO_CREDITS(62) | L3_HIGH_PRIO_CREDITS(2); - I915_WRITE(GEN8_L3SQCREG1, val); - } - /* WaToEnableHwFixForPushConstHWBug:bxt */ if (IS_BXT_REVID(dev_priv, BXT_REVID_C0, REVID_FOREVER)) WA_SET_BIT_MASKED(COMMON_SLICE_CHICKEN2, GEN8_SBE_DISABLE_REPLAY_BUF_OPTIMIZATION); - /* WaInPlaceDecompressionHang:bxt */ - if (IS_BXT_REVID(dev_priv, BXT_REVID_C0, REVID_FOREVER)) - I915_WRITE(GEN9_GAMT_ECO_REG_RW_IA, - (I915_READ(GEN9_GAMT_ECO_REG_RW_IA) | - GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS)); - - return 0; -} - -static int cnl_init_workarounds(struct intel_engine_cs *engine) -{ - struct drm_i915_private *dev_priv = engine->i915; - int ret; - - /* WaDisableI2mCycleOnWRPort:cnl (pre-prod) */ - if (IS_CNL_REVID(dev_priv, CNL_REVID_B0, CNL_REVID_B0)) - I915_WRITE(GAMT_CHKN_BIT_REG, - (I915_READ(GAMT_CHKN_BIT_REG) | - GAMT_CHKN_DISABLE_I2M_CYCLE_ON_WR_PORT)); - - /* WaForceContextSaveRestoreNonCoherent:cnl */ - WA_SET_BIT_MASKED(CNL_HDC_CHICKEN0, - HDC_FORCE_CONTEXT_SAVE_RESTORE_NON_COHERENT); - - /* WaThrottleEUPerfToAvoidTDBackPressure:cnl(pre-prod) */ - if (IS_CNL_REVID(dev_priv, CNL_REVID_B0, CNL_REVID_B0)) - WA_SET_BIT_MASKED(GEN8_ROW_CHICKEN, THROTTLE_12_5); - - /* WaDisableReplayBufferBankArbitrationOptimization:cnl */ - WA_SET_BIT_MASKED(COMMON_SLICE_CHICKEN2, - GEN8_SBE_DISABLE_REPLAY_BUF_OPTIMIZATION); - - /* WaDisableEnhancedSBEVertexCaching:cnl (pre-prod) */ - if (IS_CNL_REVID(dev_priv, 0, CNL_REVID_B0)) - WA_SET_BIT_MASKED(COMMON_SLICE_CHICKEN2, - GEN8_CSC2_SBE_VUE_CACHE_CONSERVATIVE); - - /* WaInPlaceDecompressionHang:cnl */ - I915_WRITE(GEN9_GAMT_ECO_REG_RW_IA, - (I915_READ(GEN9_GAMT_ECO_REG_RW_IA) | - GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS)); - - /* WaPushConstantDereferenceHoldDisable:cnl */ - WA_SET_BIT_MASKED(GEN7_ROW_CHICKEN2, PUSH_CONSTANT_DEREF_DISABLE); - - /* FtrEnableFastAnisoL1BankingFix: cnl */ - WA_SET_BIT_MASKED(HALF_SLICE_CHICKEN3, CNL_FAST_ANISO_L1_BANKING_FIX); - - /* WaDisable3DMidCmdPreemption:cnl */ - WA_CLR_BIT_MASKED(GEN8_CS_CHICKEN1, GEN9_PREEMPT_3D_OBJECT_LEVEL); - - /* WaDisableGPGPUMidCmdPreemption:cnl */ - WA_SET_FIELD_MASKED(GEN8_CS_CHICKEN1, GEN9_PREEMPT_GPGPU_LEVEL_MASK, - GEN9_PREEMPT_GPGPU_COMMAND_LEVEL); - - /* WaEnablePreemptionGranularityControlByUMD:cnl */ - I915_WRITE(GEN7_FF_SLICE_CS_CHICKEN1, - _MASKED_BIT_ENABLE(GEN9_FFSC_PERCTX_PREEMPT_CTRL)); - ret= wa_ring_whitelist_reg(engine, GEN8_CS_CHICKEN1); - if (ret) - return ret; - return 0; } -static int kbl_init_workarounds(struct intel_engine_cs *engine) +static int kbl_ctx_workarounds_init(struct drm_i915_private *dev_priv) { - struct drm_i915_private *dev_priv = 
engine->i915; int ret; - ret = gen9_init_workarounds(engine); + ret = gen9_ctx_workarounds_init(dev_priv); if (ret) return ret; - /* WaEnableGapsTsvCreditFix:kbl */ - I915_WRITE(GEN8_GARBCNTL, (I915_READ(GEN8_GARBCNTL) | - GEN9_GAPS_TSV_CREDIT_DISABLE)); - - /* WaDisableDynamicCreditSharing:kbl */ - if (IS_KBL_REVID(dev_priv, 0, KBL_REVID_B0)) - I915_WRITE(GAMT_CHKN_BIT_REG, - (I915_READ(GAMT_CHKN_BIT_REG) | - GAMT_CHKN_DISABLE_DYNAMIC_CREDIT_SHARING)); - /* WaDisableFenceDestinationToSLM:kbl (pre-prod) */ if (IS_KBL_REVID(dev_priv, KBL_REVID_A0, KBL_REVID_A0)) WA_SET_BIT_MASKED(HDC_CHICKEN0, @@ -566,34 +366,19 @@ static int kbl_init_workarounds(struct intel_engine_cs *engine) WA_SET_BIT_MASKED(COMMON_SLICE_CHICKEN2, GEN8_SBE_DISABLE_REPLAY_BUF_OPTIMIZATION); - /* WaDisableGafsUnitClkGating:kbl */ - I915_WRITE(GEN7_UCGCTL4, (I915_READ(GEN7_UCGCTL4) | - GEN8_EU_GAUNIT_CLOCK_GATE_DISABLE)); - /* WaDisableSbeCacheDispatchPortSharing:kbl */ WA_SET_BIT_MASKED( GEN7_HALF_SLICE_CHICKEN1, GEN7_SBE_SS_CACHE_DISPATCH_PORT_SHARING_DISABLE); - /* WaInPlaceDecompressionHang:kbl */ - I915_WRITE(GEN9_GAMT_ECO_REG_RW_IA, - (I915_READ(GEN9_GAMT_ECO_REG_RW_IA) | - GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS)); - - /* WaDisableLSQCROPERFforOCL:kbl */ - ret = wa_ring_whitelist_reg(engine, GEN8_L3SQCREG4); - if (ret) - return ret; - return 0; } -static int glk_init_workarounds(struct intel_engine_cs *engine) +static int glk_ctx_workarounds_init(struct drm_i915_private *dev_priv) { - struct drm_i915_private *dev_priv = engine->i915; int ret; - ret = gen9_init_workarounds(engine); + ret = gen9_ctx_workarounds_init(dev_priv); if (ret) return ret; @@ -604,77 +389,98 @@ static int glk_init_workarounds(struct intel_engine_cs *engine) return 0; } -static int cfl_init_workarounds(struct intel_engine_cs *engine) +static int cfl_ctx_workarounds_init(struct drm_i915_private *dev_priv) { - struct drm_i915_private *dev_priv = engine->i915; int ret; - ret = gen9_init_workarounds(engine); + ret = gen9_ctx_workarounds_init(dev_priv); if (ret) return ret; - /* WaEnableGapsTsvCreditFix:cfl */ - I915_WRITE(GEN8_GARBCNTL, (I915_READ(GEN8_GARBCNTL) | - GEN9_GAPS_TSV_CREDIT_DISABLE)); - /* WaToEnableHwFixForPushConstHWBug:cfl */ WA_SET_BIT_MASKED(COMMON_SLICE_CHICKEN2, GEN8_SBE_DISABLE_REPLAY_BUF_OPTIMIZATION); - /* WaDisableGafsUnitClkGating:cfl */ - I915_WRITE(GEN7_UCGCTL4, (I915_READ(GEN7_UCGCTL4) | - GEN8_EU_GAUNIT_CLOCK_GATE_DISABLE)); - /* WaDisableSbeCacheDispatchPortSharing:cfl */ WA_SET_BIT_MASKED( GEN7_HALF_SLICE_CHICKEN1, GEN7_SBE_SS_CACHE_DISPATCH_PORT_SHARING_DISABLE); - /* WaInPlaceDecompressionHang:cfl */ - I915_WRITE(GEN9_GAMT_ECO_REG_RW_IA, - (I915_READ(GEN9_GAMT_ECO_REG_RW_IA) | - GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS)); - return 0; } -int init_workarounds_ring(struct intel_engine_cs *engine) +static int cnl_ctx_workarounds_init(struct drm_i915_private *dev_priv) { - struct drm_i915_private *dev_priv = engine->i915; - int err; - - WARN_ON(engine->id != RCS); + /* WaForceContextSaveRestoreNonCoherent:cnl */ + WA_SET_BIT_MASKED(CNL_HDC_CHICKEN0, + HDC_FORCE_CONTEXT_SAVE_RESTORE_NON_COHERENT); - dev_priv->workarounds.count = 0; - dev_priv->workarounds.hw_whitelist_count[engine->id] = 0; + /* WaThrottleEUPerfToAvoidTDBackPressure:cnl(pre-prod) */ + if (IS_CNL_REVID(dev_priv, CNL_REVID_B0, CNL_REVID_B0)) + WA_SET_BIT_MASKED(GEN8_ROW_CHICKEN, THROTTLE_12_5); - if (IS_BROADWELL(dev_priv)) - err = bdw_init_workarounds(engine); - else if (IS_CHERRYVIEW(dev_priv)) - err = chv_init_workarounds(engine); - else if (IS_SKYLAKE(dev_priv)) 
- err = skl_init_workarounds(engine); - else if (IS_BROXTON(dev_priv)) - err = bxt_init_workarounds(engine); - else if (IS_KABYLAKE(dev_priv)) - err = kbl_init_workarounds(engine); - else if (IS_GEMINILAKE(dev_priv)) - err = glk_init_workarounds(engine); - else if (IS_COFFEELAKE(dev_priv)) - err = cfl_init_workarounds(engine); - else if (IS_CANNONLAKE(dev_priv)) - err = cnl_init_workarounds(engine); - else - err = 0; - if (err) - return err; + /* WaDisableReplayBufferBankArbitrationOptimization:cnl */ + WA_SET_BIT_MASKED(COMMON_SLICE_CHICKEN2, + GEN8_SBE_DISABLE_REPLAY_BUF_OPTIMIZATION); - DRM_DEBUG_DRIVER("%s: Number of context specific w/a: %d\n", - engine->name, dev_priv->workarounds.count); - return 0; -} + /* WaDisableEnhancedSBEVertexCaching:cnl (pre-prod) */ + if (IS_CNL_REVID(dev_priv, 0, CNL_REVID_B0)) + WA_SET_BIT_MASKED(COMMON_SLICE_CHICKEN2, + GEN8_CSC2_SBE_VUE_CACHE_CONSERVATIVE); -int intel_ring_workarounds_emit(struct drm_i915_gem_request *req) + /* WaPushConstantDereferenceHoldDisable:cnl */ + WA_SET_BIT_MASKED(GEN7_ROW_CHICKEN2, PUSH_CONSTANT_DEREF_DISABLE); + + /* FtrEnableFastAnisoL1BankingFix:cnl */ + WA_SET_BIT_MASKED(HALF_SLICE_CHICKEN3, CNL_FAST_ANISO_L1_BANKING_FIX); + + /* WaDisable3DMidCmdPreemption:cnl */ + WA_CLR_BIT_MASKED(GEN8_CS_CHICKEN1, GEN9_PREEMPT_3D_OBJECT_LEVEL); + + /* WaDisableGPGPUMidCmdPreemption:cnl */ + WA_SET_FIELD_MASKED(GEN8_CS_CHICKEN1, GEN9_PREEMPT_GPGPU_LEVEL_MASK, + GEN9_PREEMPT_GPGPU_COMMAND_LEVEL); + + return 0; +} + +int intel_ctx_workarounds_init(struct drm_i915_private *dev_priv) +{ + int err; + + dev_priv->workarounds.count = 0; + + if (INTEL_GEN(dev_priv) < 8) + err = 0; + else if (IS_BROADWELL(dev_priv)) + err = bdw_ctx_workarounds_init(dev_priv); + else if (IS_CHERRYVIEW(dev_priv)) + err = chv_ctx_workarounds_init(dev_priv); + else if (IS_SKYLAKE(dev_priv)) + err = skl_ctx_workarounds_init(dev_priv); + else if (IS_BROXTON(dev_priv)) + err = bxt_ctx_workarounds_init(dev_priv); + else if (IS_KABYLAKE(dev_priv)) + err = kbl_ctx_workarounds_init(dev_priv); + else if (IS_GEMINILAKE(dev_priv)) + err = glk_ctx_workarounds_init(dev_priv); + else if (IS_COFFEELAKE(dev_priv)) + err = cfl_ctx_workarounds_init(dev_priv); + else if (IS_CANNONLAKE(dev_priv)) + err = cnl_ctx_workarounds_init(dev_priv); + else { + MISSING_CASE(INTEL_GEN(dev_priv)); + err = 0; + } + if (err) + return err; + + DRM_DEBUG_DRIVER("Number of context specific w/a: %d\n", + dev_priv->workarounds.count); + return 0; +} + +int intel_ctx_workarounds_emit(struct drm_i915_gem_request *req) { struct i915_workarounds *w = &req->i915->workarounds; u32 *cs; @@ -706,3 +512,353 @@ int intel_ring_workarounds_emit(struct drm_i915_gem_request *req) return 0; } + +static void bdw_gt_workarounds_apply(struct drm_i915_private *dev_priv) +{ +} + +static void chv_gt_workarounds_apply(struct drm_i915_private *dev_priv) +{ +} + +static void gen9_gt_workarounds_apply(struct drm_i915_private *dev_priv) +{ + if (HAS_LLC(dev_priv)) { + /* WaCompressedResourceSamplerPbeMediaNewHashMode:skl,kbl + * + * Must match Display Engine. See + * WaCompressedResourceDisplayNewHashMode. 
+ */ + I915_WRITE(MMCD_MISC_CTRL, + I915_READ(MMCD_MISC_CTRL) | + MMCD_PCLA | + MMCD_HOTSPOT_EN); + } + + /* WaContextSwitchWithConcurrentTLBInvalidate:skl,bxt,kbl,glk,cfl */ + I915_WRITE(GEN9_CSFE_CHICKEN1_RCS, + _MASKED_BIT_ENABLE(GEN9_PREEMPT_GPGPU_SYNC_SWITCH_DISABLE)); + + /* WaEnableLbsSlaRetryTimerDecrement:skl,bxt,kbl,glk,cfl */ + I915_WRITE(BDW_SCRATCH1, I915_READ(BDW_SCRATCH1) | + GEN9_LBS_SLA_RETRY_TIMER_DECREMENT_ENABLE); + + /* WaDisableKillLogic:bxt,skl,kbl */ + if (!IS_COFFEELAKE(dev_priv)) + I915_WRITE(GAM_ECOCHK, I915_READ(GAM_ECOCHK) | + ECOCHK_DIS_TLB); + + /* WaDisableHDCInvalidation:skl,bxt,kbl,cfl */ + I915_WRITE(GAM_ECOCHK, I915_READ(GAM_ECOCHK) | + BDW_DISABLE_HDC_INVALIDATION); + + /* WaOCLCoherentLineFlush:skl,bxt,kbl,cfl */ + I915_WRITE(GEN8_L3SQCREG4, (I915_READ(GEN8_L3SQCREG4) | + GEN8_LQSC_FLUSH_COHERENT_LINES)); + + /* WaEnablePreemptionGranularityControlByUMD:skl,bxt,kbl,cfl,[cnl] */ + I915_WRITE(GEN7_FF_SLICE_CS_CHICKEN1, + _MASKED_BIT_ENABLE(GEN9_FFSC_PERCTX_PREEMPT_CTRL)); +} + +static void skl_gt_workarounds_apply(struct drm_i915_private *dev_priv) +{ + gen9_gt_workarounds_apply(dev_priv); + + /* WaEnableGapsTsvCreditFix:skl */ + I915_WRITE(GEN8_GARBCNTL, (I915_READ(GEN8_GARBCNTL) | + GEN9_GAPS_TSV_CREDIT_DISABLE)); + + /* WaDisableGafsUnitClkGating:skl */ + I915_WRITE(GEN7_UCGCTL4, (I915_READ(GEN7_UCGCTL4) | + GEN8_EU_GAUNIT_CLOCK_GATE_DISABLE)); + + /* WaInPlaceDecompressionHang:skl */ + if (IS_SKL_REVID(dev_priv, SKL_REVID_H0, REVID_FOREVER)) + I915_WRITE(GEN9_GAMT_ECO_REG_RW_IA, + (I915_READ(GEN9_GAMT_ECO_REG_RW_IA) | + GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS)); +} + +static void bxt_gt_workarounds_apply(struct drm_i915_private *dev_priv) +{ + gen9_gt_workarounds_apply(dev_priv); + + /* WaStoreMultiplePTEenable:bxt */ + /* This is a requirement according to Hardware specification */ + if (IS_BXT_REVID(dev_priv, 0, BXT_REVID_A1)) + I915_WRITE(TILECTL, I915_READ(TILECTL) | TILECTL_TLBPF); + + /* WaSetClckGatingDisableMedia:bxt */ + if (IS_BXT_REVID(dev_priv, 0, BXT_REVID_A1)) { + I915_WRITE(GEN7_MISCCPCTL, (I915_READ(GEN7_MISCCPCTL) & + ~GEN8_DOP_CLOCK_GATE_MEDIA_ENABLE)); + } + + /* WaDisablePooledEuLoadBalancingFix:bxt */ + if (IS_BXT_REVID(dev_priv, BXT_REVID_B0, REVID_FOREVER)) { + I915_WRITE(FF_SLICE_CS_CHICKEN2, + _MASKED_BIT_ENABLE(GEN9_POOLED_EU_LOAD_BALANCING_FIX_DISABLE)); + } + + /* WaProgramL3SqcReg1DefaultForPerf:bxt */ + if (IS_BXT_REVID(dev_priv, BXT_REVID_B0, REVID_FOREVER)) { + u32 val = I915_READ(GEN8_L3SQCREG1); + val &= ~L3_PRIO_CREDITS_MASK; + val |= L3_GENERAL_PRIO_CREDITS(62) | L3_HIGH_PRIO_CREDITS(2); + I915_WRITE(GEN8_L3SQCREG1, val); + } + + /* WaInPlaceDecompressionHang:bxt */ + if (IS_BXT_REVID(dev_priv, BXT_REVID_C0, REVID_FOREVER)) + I915_WRITE(GEN9_GAMT_ECO_REG_RW_IA, + (I915_READ(GEN9_GAMT_ECO_REG_RW_IA) | + GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS)); +} + +static void kbl_gt_workarounds_apply(struct drm_i915_private *dev_priv) +{ + gen9_gt_workarounds_apply(dev_priv); + + /* WaEnableGapsTsvCreditFix:kbl */ + I915_WRITE(GEN8_GARBCNTL, (I915_READ(GEN8_GARBCNTL) | + GEN9_GAPS_TSV_CREDIT_DISABLE)); + + /* WaDisableDynamicCreditSharing:kbl */ + if (IS_KBL_REVID(dev_priv, 0, KBL_REVID_B0)) + I915_WRITE(GAMT_CHKN_BIT_REG, + (I915_READ(GAMT_CHKN_BIT_REG) | + GAMT_CHKN_DISABLE_DYNAMIC_CREDIT_SHARING)); + + /* WaDisableGafsUnitClkGating:kbl */ + I915_WRITE(GEN7_UCGCTL4, (I915_READ(GEN7_UCGCTL4) | + GEN8_EU_GAUNIT_CLOCK_GATE_DISABLE)); + + /* WaInPlaceDecompressionHang:kbl */ + I915_WRITE(GEN9_GAMT_ECO_REG_RW_IA, + 
(I915_READ(GEN9_GAMT_ECO_REG_RW_IA) | + GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS)); +} + +static void glk_gt_workarounds_apply(struct drm_i915_private *dev_priv) +{ + gen9_gt_workarounds_apply(dev_priv); +} + +static void cfl_gt_workarounds_apply(struct drm_i915_private *dev_priv) +{ + gen9_gt_workarounds_apply(dev_priv); + + /* WaEnableGapsTsvCreditFix:cfl */ + I915_WRITE(GEN8_GARBCNTL, (I915_READ(GEN8_GARBCNTL) | + GEN9_GAPS_TSV_CREDIT_DISABLE)); + + /* WaDisableGafsUnitClkGating:cfl */ + I915_WRITE(GEN7_UCGCTL4, (I915_READ(GEN7_UCGCTL4) | + GEN8_EU_GAUNIT_CLOCK_GATE_DISABLE)); + + /* WaInPlaceDecompressionHang:cfl */ + I915_WRITE(GEN9_GAMT_ECO_REG_RW_IA, + (I915_READ(GEN9_GAMT_ECO_REG_RW_IA) | + GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS)); +} + +static void cnl_gt_workarounds_apply(struct drm_i915_private *dev_priv) +{ + /* WaDisableI2mCycleOnWRPort:cnl (pre-prod) */ + if (IS_CNL_REVID(dev_priv, CNL_REVID_B0, CNL_REVID_B0)) + I915_WRITE(GAMT_CHKN_BIT_REG, + (I915_READ(GAMT_CHKN_BIT_REG) | + GAMT_CHKN_DISABLE_I2M_CYCLE_ON_WR_PORT)); + + /* WaInPlaceDecompressionHang:cnl */ + I915_WRITE(GEN9_GAMT_ECO_REG_RW_IA, + (I915_READ(GEN9_GAMT_ECO_REG_RW_IA) | + GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS)); + + /* WaEnablePreemptionGranularityControlByUMD:cnl */ + I915_WRITE(GEN7_FF_SLICE_CS_CHICKEN1, + _MASKED_BIT_ENABLE(GEN9_FFSC_PERCTX_PREEMPT_CTRL)); +} + +void intel_gt_workarounds_apply(struct drm_i915_private *dev_priv) +{ + if (INTEL_GEN(dev_priv) < 8) + return; + else if (IS_BROADWELL(dev_priv)) + bdw_gt_workarounds_apply(dev_priv); + else if (IS_CHERRYVIEW(dev_priv)) + chv_gt_workarounds_apply(dev_priv); + else if (IS_SKYLAKE(dev_priv)) + skl_gt_workarounds_apply(dev_priv); + else if (IS_BROXTON(dev_priv)) + bxt_gt_workarounds_apply(dev_priv); + else if (IS_KABYLAKE(dev_priv)) + kbl_gt_workarounds_apply(dev_priv); + else if (IS_GEMINILAKE(dev_priv)) + glk_gt_workarounds_apply(dev_priv); + else if (IS_COFFEELAKE(dev_priv)) + cfl_gt_workarounds_apply(dev_priv); + else if (IS_CANNONLAKE(dev_priv)) + cnl_gt_workarounds_apply(dev_priv); + else + MISSING_CASE(INTEL_GEN(dev_priv)); +} + +static int wa_ring_whitelist_reg(struct intel_engine_cs *engine, + i915_reg_t reg) +{ + struct drm_i915_private *dev_priv = engine->i915; + struct i915_workarounds *wa = &dev_priv->workarounds; + const uint32_t index = wa->hw_whitelist_count[engine->id]; + + if (WARN_ON(index >= RING_MAX_NONPRIV_SLOTS)) + return -EINVAL; + + I915_WRITE(RING_FORCE_TO_NONPRIV(engine->mmio_base, index), + i915_mmio_reg_offset(reg)); + wa->hw_whitelist_count[engine->id]++; + + return 0; +} + +static int gen9_whitelist_workarounds_apply(struct intel_engine_cs *engine) +{ + int ret; + + /* WaVFEStateAfterPipeControlwithMediaStateClear:skl,bxt,glk,cfl */ + ret = wa_ring_whitelist_reg(engine, GEN9_CTX_PREEMPT_REG); + if (ret) + return ret; + + /* WaEnablePreemptionGranularityControlByUMD:skl,bxt,kbl,cfl,[cnl] */ + ret = wa_ring_whitelist_reg(engine, GEN8_CS_CHICKEN1); + if (ret) + return ret; + + /* WaAllowUMDToModifyHDCChicken1:skl,bxt,kbl,glk,cfl */ + ret = wa_ring_whitelist_reg(engine, GEN8_HDC_CHICKEN1); + if (ret) + return ret; + + return 0; +} + +static int skl_whitelist_workarounds_apply(struct intel_engine_cs *engine) +{ + int ret = gen9_whitelist_workarounds_apply(engine); + if (ret) + return ret; + + /* WaDisableLSQCROPERFforOCL:skl */ + ret = wa_ring_whitelist_reg(engine, GEN8_L3SQCREG4); + if (ret) + return ret; + + return 0; +} + +static int bxt_whitelist_workarounds_apply(struct intel_engine_cs *engine) +{ + struct drm_i915_private *dev_priv 
= engine->i915; + + int ret = gen9_whitelist_workarounds_apply(engine); + if (ret) + return ret; + + /* WaDisableObjectLevelPreemptionForTrifanOrPolygon:bxt */ + /* WaDisableObjectLevelPreemptionForInstancedDraw:bxt */ + /* WaDisableObjectLevelPreemtionForInstanceId:bxt */ + /* WaDisableLSQCROPERFforOCL:bxt */ + if (IS_BXT_REVID(dev_priv, 0, BXT_REVID_A1)) { + ret = wa_ring_whitelist_reg(engine, GEN9_CS_DEBUG_MODE1); + if (ret) + return ret; + + ret = wa_ring_whitelist_reg(engine, GEN8_L3SQCREG4); + if (ret) + return ret; + } + + return 0; +} + +static int kbl_whitelist_workarounds_apply(struct intel_engine_cs *engine) +{ + int ret = gen9_whitelist_workarounds_apply(engine); + if (ret) + return ret; + + /* WaDisableLSQCROPERFforOCL:kbl */ + ret = wa_ring_whitelist_reg(engine, GEN8_L3SQCREG4); + if (ret) + return ret; + + return 0; +} + +static int glk_whitelist_workarounds_apply(struct intel_engine_cs *engine) +{ + int ret = gen9_whitelist_workarounds_apply(engine); + if (ret) + return ret; + + return 0; +} + +static int cfl_whitelist_workarounds_apply(struct intel_engine_cs *engine) +{ + int ret = gen9_whitelist_workarounds_apply(engine); + if (ret) + return ret; + + return 0; +} + +static int cnl_whitelist_workarounds_apply(struct intel_engine_cs *engine) +{ + int ret; + + /* WaEnablePreemptionGranularityControlByUMD:cnl */ + ret = wa_ring_whitelist_reg(engine, GEN8_CS_CHICKEN1); + if (ret) + return ret; + + return 0; +} + +int intel_whitelist_workarounds_apply(struct intel_engine_cs *engine) +{ + struct drm_i915_private *dev_priv = engine->i915; + int err; + + WARN_ON(engine->id != RCS); + + dev_priv->workarounds.hw_whitelist_count[engine->id] = 0; + + if (INTEL_GEN(dev_priv) < 9) { + WARN(1, "No whitelisting in Gen%u\n", INTEL_GEN(dev_priv)); + err = 0; + } else if (IS_SKYLAKE(dev_priv)) + err = skl_whitelist_workarounds_apply(engine); + else if (IS_BROXTON(dev_priv)) + err = bxt_whitelist_workarounds_apply(engine); + else if (IS_KABYLAKE(dev_priv)) + err = kbl_whitelist_workarounds_apply(engine); + else if (IS_GEMINILAKE(dev_priv)) + err = glk_whitelist_workarounds_apply(engine); + else if (IS_COFFEELAKE(dev_priv)) + err = cfl_whitelist_workarounds_apply(engine); + else if (IS_CANNONLAKE(dev_priv)) + err = cnl_whitelist_workarounds_apply(engine); + else { + MISSING_CASE(INTEL_GEN(dev_priv)); + err = 0; + } + if (err) + return err; + + DRM_DEBUG_DRIVER("%s: Number of whitelist w/a: %d\n", engine->name, + dev_priv->workarounds.hw_whitelist_count[engine->id]); + return 0; +} diff --git a/drivers/gpu/drm/i915/intel_workarounds.h b/drivers/gpu/drm/i915/intel_workarounds.h index 27099fc..bba51bb 100644 --- a/drivers/gpu/drm/i915/intel_workarounds.h +++ b/drivers/gpu/drm/i915/intel_workarounds.h @@ -25,7 +25,11 @@ #ifndef _I915_WORKAROUNDS_H_ #define _I915_WORKAROUNDS_H_ -int init_workarounds_ring(struct intel_engine_cs *engine); -int intel_ring_workarounds_emit(struct drm_i915_gem_request *req); +int intel_ctx_workarounds_init(struct drm_i915_private *dev_priv); +int intel_ctx_workarounds_emit(struct drm_i915_gem_request *req); + +void intel_gt_workarounds_apply(struct drm_i915_private *dev_priv); + +int intel_whitelist_workarounds_apply(struct intel_engine_cs *engine); #endif
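One thing the diff does not show is the body of the emit helper:
intel_ctx_workarounds_emit() is just the renamed intel_ring_workarounds_emit(),
so only the start of its body appears above. As a rough sketch -- assuming the
pre-existing implementation is otherwise unchanged -- it writes the
(register, value) pairs collected at init time into the ring as a single
MI_LOAD_REGISTER_IMM block whenever a new render context is initialised:

int intel_ctx_workarounds_emit(struct drm_i915_gem_request *req)
{
	struct i915_workarounds *w = &req->i915->workarounds;
	u32 *cs;
	int ret, i;

	if (w->count == 0)
		return 0;

	/* Serialise against whatever is already in the ring. */
	ret = req->engine->emit_flush(req, EMIT_BARRIER);
	if (ret)
		return ret;

	cs = intel_ring_begin(req, w->count * 2 + 2);
	if (IS_ERR(cs))
		return PTR_ERR(cs);

	/* One LRI header followed by an (offset, value) pair per workaround. */
	*cs++ = MI_LOAD_REGISTER_IMM(w->count);
	for (i = 0; i < w->count; i++) {
		*cs++ = i915_mmio_reg_offset(w->reg[i].addr);
		*cs++ = w->reg[i].value;
	}
	*cs++ = MI_NOOP;

	intel_ring_advance(req, cs);

	/* Flush again so the register writes land before the context runs. */
	return req->engine->emit_flush(req, EMIT_BARRIER);
}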