From patchwork Tue Nov  6 21:51:21 2018
X-Patchwork-Submitter: Lucas De Marchi
X-Patchwork-Id: 10671507
From: Lucas De Marchi
To: intel-gfx@lists.freedesktop.org
Cc: Rodrigo Vivi
Date: Tue, 6 Nov 2018 13:51:21 -0800
Message-Id: <20181106215123.27568-6-lucas.demarchi@intel.com>
In-Reply-To: <20181106215123.27568-1-lucas.demarchi@intel.com>
References: <20181106215123.27568-1-lucas.demarchi@intel.com>
Subject: [Intel-gfx] [PATCH v2 5/7] drm/i915: replace gen checks using operators by GT_GEN/GT_GEN_RANGE
List-Id: Intel graphics driver community testing & development

By using only the mask, turn the GT GEN checks to always use the same
macros. This opens the possibility for the compiler to merge the
bitfield checks, and makes it easier to extend to other gens.

This patch has been generated in 2 steps due to my lack of knowledge of
how to make embedded Python work with Coccinelle.
First, the following spatch:

@gt@
expression E;
constant C;
@@
- INTEL_GEN(E) > C
+ GT_GEN_RANGE(E, C + ONE___FIXMEUP, GEN_FOREVER)

@lt@
expression E;
constant C;
@@
- INTEL_GEN(E) < C
+ GT_GEN_RANGE(E, 0, C - ONE___FIXMEUP)

@eq@
expression E;
constant C;
@@
- INTEL_GEN(E) == C
+ GT_GEN(E, C)

@ge@
expression E;
constant C;
@@
- INTEL_GEN(E) >= C
+ GT_GEN_RANGE(E, C, GEN_FOREVER)

@le@
expression E;
constant C;
@@
- INTEL_GEN(E) <= C
+ GT_GEN_RANGE(E, 0, C)

Accompanied by the following awk scripts to fix up the placeholders, again due to my lack of knowledge of how to do this with Coccinelle alone:

awk -i inplace '
match($0, /([0-9]+) - ONE___FIXMEUP/, oldgen) {
	newgen = oldgen[0] - 1;
	gsub(/[0-9]+ - ONE___FIXMEUP/, newgen);
}
/.*/ { print $0 }' drivers/gpu/drm/i915/{*.[ch],*/*[.ch]}

awk -i inplace '
match($0, /([0-9]+) \+ ONE___FIXMEUP/, oldgen) {
	newgen = oldgen[0] + 1;
	gsub(/[0-9]+ \+ ONE___FIXMEUP/, newgen);
}
/.*/ { print $0 }' drivers/gpu/drm/i915/{*.[ch],*/*[.ch]}

Signed-off-by: Lucas De Marchi
---
 drivers/gpu/drm/i915/gvt/gtt.c | 4 +-
 drivers/gpu/drm/i915/gvt/handlers.c | 2 +-
 drivers/gpu/drm/i915/i915_debugfs.c | 107 ++++-----
 drivers/gpu/drm/i915/i915_drv.c | 26 +--
 drivers/gpu/drm/i915/i915_drv.h | 18 +-
 drivers/gpu/drm/i915/i915_gem.c | 18 +-
 drivers/gpu/drm/i915/i915_gem_context.c | 2 +-
 drivers/gpu/drm/i915/i915_gem_execbuffer.c | 2 +-
 drivers/gpu/drm/i915/i915_gem_fence_reg.c | 6 +-
 drivers/gpu/drm/i915/i915_gem_gtt.c | 26 +--
 drivers/gpu/drm/i915/i915_gem_stolen.c | 6 +-
 drivers/gpu/drm/i915/i915_gem_tiling.c | 8 +-
 drivers/gpu/drm/i915/i915_gpu_error.c | 42 ++--
 drivers/gpu/drm/i915/i915_irq.c | 88 +++----
 drivers/gpu/drm/i915/i915_perf.c | 2 +-
 drivers/gpu/drm/i915/i915_pmu.c | 6 +-
 drivers/gpu/drm/i915/i915_reg.h | 4 +-
 drivers/gpu/drm/i915/i915_suspend.c | 12 +-
 drivers/gpu/drm/i915/i915_sysfs.c | 2 +-
 drivers/gpu/drm/i915/intel_atomic.c | 2 +-
 drivers/gpu/drm/i915/intel_audio.c | 2 +-
 drivers/gpu/drm/i915/intel_bios.c | 4 +-
 drivers/gpu/drm/i915/intel_cdclk.c | 8 +-
 drivers/gpu/drm/i915/intel_color.c | 4 +-
 drivers/gpu/drm/i915/intel_crt.c | 6 +-
 drivers/gpu/drm/i915/intel_ddi.c | 16 +-
 drivers/gpu/drm/i915/intel_device_info.c | 18 +-
 drivers/gpu/drm/i915/intel_display.c | 216 +++++++++---------
 drivers/gpu/drm/i915/intel_dp.c | 32 +--
 drivers/gpu/drm/i915/intel_dpll_mgr.c | 6 +-
 drivers/gpu/drm/i915/intel_drv.h | 2 +-
 drivers/gpu/drm/i915/intel_engine_cs.c | 28 +--
 drivers/gpu/drm/i915/intel_fbc.c | 26 +--
 drivers/gpu/drm/i915/intel_fifo_underrun.c | 2 +-
 drivers/gpu/drm/i915/intel_hangcheck.c | 2 +-
 drivers/gpu/drm/i915/intel_hdcp.c | 2 +-
 drivers/gpu/drm/i915/intel_hdmi.c | 14 +-
 drivers/gpu/drm/i915/intel_i2c.c | 2 +-
 drivers/gpu/drm/i915/intel_lrc.c | 20 +-
 drivers/gpu/drm/i915/intel_lvds.c | 10 +-
 drivers/gpu/drm/i915/intel_mocs.c | 2 +-
 drivers/gpu/drm/i915/intel_overlay.c | 2 +-
 drivers/gpu/drm/i915/intel_panel.c | 10 +-
 drivers/gpu/drm/i915/intel_pipe_crc.c | 4 +-
 drivers/gpu/drm/i915/intel_pm.c | 128 +++++------
 drivers/gpu/drm/i915/intel_psr.c | 22 +-
 drivers/gpu/drm/i915/intel_ringbuffer.c | 32 +--
 drivers/gpu/drm/i915/intel_runtime_pm.c | 10 +-
 drivers/gpu/drm/i915/intel_sdvo.c | 14 +-
 drivers/gpu/drm/i915/intel_sprite.c | 28 +--
 drivers/gpu/drm/i915/intel_tv.c | 2 +-
 drivers/gpu/drm/i915/intel_uc.c | 2 +-
 drivers/gpu/drm/i915/intel_uncore.c | 30 +--
 drivers/gpu/drm/i915/intel_wopcm.c | 2 +-
 drivers/gpu/drm/i915/intel_workarounds.c | 10 +-
 .../drm/i915/selftests/i915_gem_coherency.c | 4 +-
 .../gpu/drm/i915/selftests/i915_gem_context.c | 8 +-
 .../gpu/drm/i915/selftests/i915_gem_object.c | 12 +-
 .../gpu/drm/i915/selftests/intel_hangcheck.c | 8 +-
 drivers/gpu/drm/i915/selftests/intel_lrc.c | 2 +-
 drivers/gpu/drm/i915/selftests/intel_uncore.c | 2 +-
 .../drm/i915/selftests/intel_workarounds.c | 2 +-
 62 files changed, 570 insertions(+), 569 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/gtt.c b/drivers/gpu/drm/i915/gvt/gtt.c
index 2402395a068d..bc79c154391d 100644
--- a/drivers/gpu/drm/i915/gvt/gtt.c
+++ b/drivers/gpu/drm/i915/gvt/gtt.c
@@ -1025,12 +1025,12 @@ static bool vgpu_ips_enabled(struct intel_vgpu *vgpu)
 {
 	struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
 
-	if (INTEL_GEN(dev_priv) == 9 || INTEL_GEN(dev_priv) == 10) {
+	if (GT_GEN(dev_priv, 9) || GT_GEN(dev_priv, 10)) {
 		u32 ips = vgpu_vreg_t(vgpu, GEN8_GAMW_ECO_DEV_RW_IA) &
 			GAMW_ECO_ENABLE_64K_IPS_FIELD;
 
 		return ips == GAMW_ECO_ENABLE_64K_IPS_FIELD;
-	} else if (INTEL_GEN(dev_priv) >= 11) {
+	} else if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) {
 		/* 64K paging only controlled by IPS bit in PTE now. */
 		return true;
 	} else
diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
index 90f50f67909a..0dec066dab5d 100644
--- a/drivers/gpu/drm/i915/gvt/handlers.c
+++ b/drivers/gpu/drm/i915/gvt/handlers.c
@@ -215,7 +215,7 @@ static int gamw_echo_dev_rw_ia_write(struct intel_vgpu *vgpu,
 {
 	u32 ips = (*(u32 *)p_data) & GAMW_ECO_ENABLE_64K_IPS_FIELD;
 
-	if (INTEL_GEN(vgpu->gvt->dev_priv) <= 10) {
+	if (GT_GEN_RANGE(vgpu->gvt->dev_priv, 0, 10)) {
 		if (ips == GAMW_ECO_ENABLE_64K_IPS_FIELD)
 			gvt_dbg_core("vgpu%d: ips enabled\n", vgpu->id);
 		else if (!ips)
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 5e4a934c0dea..ec26076aaecc 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -762,7 +762,7 @@ static int i915_interrupt_info(struct seq_file *m, void *data)
 			   I915_READ(GEN8_PCU_IIR));
 		seq_printf(m, "PCU interrupt enable:\t%08x\n",
 			   I915_READ(GEN8_PCU_IER));
-	} else if (INTEL_GEN(dev_priv) >= 11) {
+	} else if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) {
 		seq_printf(m, "Master Interrupt Control: %08x\n",
 			   I915_READ(GEN11_GFX_MSTR_IRQ));
@@ -783,7 +783,7 @@ static int i915_interrupt_info(struct seq_file *m, void *data)
 			   I915_READ(GEN11_DISPLAY_INT_CTL));
 
 		gen8_display_interrupt_info(m);
-	} else if (INTEL_GEN(dev_priv) >= 8) {
+	} else if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER)) {
 		seq_printf(m, "Master Interrupt Control:\t%08x\n",
 			   I915_READ(GEN8_MASTER_IRQ));
@@ -879,7 +879,7 @@ static int i915_interrupt_info(struct seq_file *m, void *data)
 			   I915_READ(GTIMR));
 	}
 
-	if (INTEL_GEN(dev_priv) >= 11) {
+	if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) {
 		seq_printf(m, "RCS Intr Mask:\t %08x\n",
 			   I915_READ(GEN11_RCS0_RSVD_INTR_MASK));
 		seq_printf(m, "BCS Intr Mask:\t %08x\n",
@@ -899,7 +899,7 @@ static int i915_interrupt_info(struct seq_file *m, void *data)
 		seq_printf(m, "Gunit/CSME Intr Mask:\t %08x\n",
 			   I915_READ(GEN11_GUNIT_CSME_INTR_MASK));
 
-	} else if (INTEL_GEN(dev_priv) >= 6) {
+	} else if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) {
 		for_each_engine(engine, dev_priv, id) {
 			seq_printf(m,
 				   "Graphics Interrupt mask (%s):	%08x\n",
@@ -1111,7 +1111,7 @@ static int i915_frequency_info(struct seq_file *m, void *unused)
 			   "efficient (RPe) frequency: %d MHz\n",
 			   intel_gpu_freq(dev_priv, rps->efficient_freq));
 		mutex_unlock(&dev_priv->pcu_lock);
-	} else if (INTEL_GEN(dev_priv) >= 6) {
+	} else if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) {
 		u32 rp_state_limits;
 		u32 gt_perf_status;
 		u32 rp_state_cap;
@@ -1135,7 +1135,7 @@ static int i915_frequency_info(struct seq_file *m, void *unused)
 		intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
 
 		reqf = I915_READ(GEN6_RPNSWREQ);
-		if (INTEL_GEN(dev_priv) >= 9)
+		if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
 			reqf >>= 23;
 		else {
 			reqf &= ~GEN6_TURBO_DISABLE;
@@ -1162,7 +1162,7 @@ static int i915_frequency_info(struct seq_file *m, void *unused)
 
 		intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
 
-		if (INTEL_GEN(dev_priv) >= 11) {
+		if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) {
 			pm_ier = I915_READ(GEN11_GPM_WGBOXPERF_INTR_ENABLE);
 			pm_imr = I915_READ(GEN11_GPM_WGBOXPERF_INTR_MASK);
 			/*
@@ -1171,7 +1171,7 @@ static int i915_frequency_info(struct seq_file *m, void *unused)
 			 */
 			pm_isr = 0;
 			pm_iir = 0;
-		} else if (INTEL_GEN(dev_priv) >= 8) {
+		} else if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER)) {
 			pm_ier = I915_READ(GEN8_GT_IER(2));
 			pm_imr = I915_READ(GEN8_GT_IMR(2));
 			pm_isr = I915_READ(GEN8_GT_ISR(2));
@@ -1194,14 +1194,14 @@ static int i915_frequency_info(struct seq_file *m, void *unused)
 		seq_printf(m, "PM IER=0x%08x IMR=0x%08x, MASK=0x%08x\n",
 			   pm_ier, pm_imr, pm_mask);
-		if (INTEL_GEN(dev_priv) <= 10)
+		if (GT_GEN_RANGE(dev_priv, 0, 10))
 			seq_printf(m, "PM ISR=0x%08x IIR=0x%08x\n",
 				   pm_isr, pm_iir);
 		seq_printf(m, "pm_intrmsk_mbz: 0x%08x\n",
 			   rps->pm_intrmsk_mbz);
 		seq_printf(m, "GT_PERF_STATUS: 0x%08x\n", gt_perf_status);
 		seq_printf(m, "Render p-state ratio: %d\n",
-			   (gt_perf_status & (INTEL_GEN(dev_priv) >= 9 ? 0x1ff00 : 0xff00)) >> 8);
+			   (gt_perf_status & (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER) ? 0x1ff00 : 0xff00)) >> 8);
 		seq_printf(m, "Render p-state VID: %d\n",
 			   gt_perf_status & 0xff);
 		seq_printf(m, "Render p-state limit: %d\n",
@@ -1233,20 +1233,20 @@ static int i915_frequency_info(struct seq_file *m, void *unused)
 		max_freq = (GT_GEN9_LP(dev_priv) ? rp_state_cap >> 0 :
 			    rp_state_cap >> 16) & 0xff;
 		max_freq *= (GT_GEN9_BC(dev_priv) ||
-			     INTEL_GEN(dev_priv) >= 10 ? GEN9_FREQ_SCALER : 1);
+			     GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER) ? GEN9_FREQ_SCALER : 1);
 		seq_printf(m, "Lowest (RPN) frequency: %dMHz\n",
 			   intel_gpu_freq(dev_priv, max_freq));
 
 		max_freq = (rp_state_cap & 0xff00) >> 8;
 		max_freq *= (GT_GEN9_BC(dev_priv) ||
-			     INTEL_GEN(dev_priv) >= 10 ? GEN9_FREQ_SCALER : 1);
+			     GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER) ? GEN9_FREQ_SCALER : 1);
 		seq_printf(m, "Nominal (RP1) frequency: %dMHz\n",
 			   intel_gpu_freq(dev_priv, max_freq));
 
 		max_freq = (GT_GEN9_LP(dev_priv) ? rp_state_cap >> 16 :
 			    rp_state_cap >> 0) & 0xff;
 		max_freq *= (GT_GEN9_BC(dev_priv) ||
-			     INTEL_GEN(dev_priv) >= 10 ? GEN9_FREQ_SCALER : 1);
+			     GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER) ? GEN9_FREQ_SCALER : 1);
 		seq_printf(m, "Max non-overclocked (RP0) frequency: %dMHz\n",
 			   intel_gpu_freq(dev_priv, max_freq));
 
 		seq_printf(m, "Max overclocked frequency: %dMHz\n",
@@ -1288,13 +1288,13 @@ static void i915_instdone_info(struct drm_i915_private *dev_priv,
 	seq_printf(m, "\t\tINSTDONE: 0x%08x\n",
 		   instdone->instdone);
 
-	if (INTEL_GEN(dev_priv) <= 3)
+	if (GT_GEN_RANGE(dev_priv, 0, 3))
 		return;
 
 	seq_printf(m, "\t\tSC_INSTDONE: 0x%08x\n",
 		   instdone->slice_common);
 
-	if (INTEL_GEN(dev_priv) <= 6)
+	if (GT_GEN_RANGE(dev_priv, 0, 6))
 		return;
 
 	for_each_instdone_slice_subslice(dev_priv, slice, subslice)
@@ -1535,12 +1535,12 @@ static int gen6_drpc_info(struct seq_file *m)
 	trace_i915_reg_rw(false, GEN6_GT_CORE_STATUS, gt_core_status, 4, true);
 
 	rcctl1 = I915_READ(GEN6_RC_CONTROL);
-	if (INTEL_GEN(dev_priv) >= 9) {
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) {
 		gen9_powergate_enable = I915_READ(GEN9_PG_ENABLE);
 		gen9_powergate_status = I915_READ(GEN9_PWRGT_DOMAIN_STATUS);
 	}
 
-	if (INTEL_GEN(dev_priv) <= 7) {
+	if (GT_GEN_RANGE(dev_priv, 0, 7)) {
 		mutex_lock(&dev_priv->pcu_lock);
 		sandybridge_pcode_read(dev_priv, GEN6_PCODE_READ_RC6VIDS,
 				       &rc6vids);
@@ -1551,7 +1551,7 @@ static int gen6_drpc_info(struct seq_file *m)
 		   yesno(rcctl1 & GEN6_RC_CTL_RC1e_ENABLE));
 	seq_printf(m, "RC6 Enabled: %s\n",
 		   yesno(rcctl1 & GEN6_RC_CTL_RC6_ENABLE));
-	if (INTEL_GEN(dev_priv) >= 9) {
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) {
 		seq_printf(m, "Render Well Gating Enabled: %s\n",
 			   yesno(gen9_powergate_enable & GEN9_RENDER_PG_ENABLE));
 		seq_printf(m, "Media Well Gating Enabled: %s\n",
@@ -1585,7 +1585,7 @@ static int gen6_drpc_info(struct seq_file *m)
 	seq_printf(m, "Core Power Down: %s\n",
 		   yesno(gt_core_status & GEN6_CORE_CPD_STATE_MASK));
-	if (INTEL_GEN(dev_priv) >= 9) {
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) {
 		seq_printf(m, "Render Power Well: %s\n",
 			   (gen9_powergate_status &
 			    GEN9_PWRGT_RENDER_STATUS_MASK) ? "Up" : "Down");
@@ -1601,7 +1601,7 @@ static int gen6_drpc_info(struct seq_file *m)
 	print_rc6_res(m, "RC6+ residency since boot:", GEN6_GT_GFX_RC6p);
 	print_rc6_res(m, "RC6++ residency since boot:", GEN6_GT_GFX_RC6pp);
 
-	if (INTEL_GEN(dev_priv) <= 7) {
+	if (GT_GEN_RANGE(dev_priv, 0, 7)) {
 		seq_printf(m, "RC6 voltage: %dmV\n",
 			   GEN6_DECODE_RC6_VID(((rc6vids >> 0) & 0xff)));
 		seq_printf(m, "RC6+ voltage: %dmV\n",
@@ -1622,7 +1622,7 @@ static int i915_drpc_info(struct seq_file *m, void *unused)
 	if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
 		err = vlv_drpc_info(m);
-	else if (INTEL_GEN(dev_priv) >= 6)
+	else if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER))
 		err = gen6_drpc_info(m);
 	else
 		err = ironlake_drpc_info(m);
@@ -1664,11 +1664,11 @@ static int i915_fbc_status(struct seq_file *m, void *unused)
 	if (intel_fbc_is_active(dev_priv)) {
 		u32 mask;
 
-		if (INTEL_GEN(dev_priv) >= 8)
+		if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER))
 			mask = I915_READ(IVB_FBC_STATUS2) & BDW_FBC_COMP_SEG_MASK;
-		else if (INTEL_GEN(dev_priv) >= 7)
+		else if (GT_GEN_RANGE(dev_priv, 7, GEN_FOREVER))
 			mask = I915_READ(IVB_FBC_STATUS2) & IVB_FBC_COMP_SEG_MASK;
-		else if (INTEL_GEN(dev_priv) >= 5)
+		else if (GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER))
 			mask = I915_READ(ILK_DPFC_STATUS) & ILK_DPFC_COMP_SEG_MASK;
 		else if (IS_G4X(dev_priv))
 			mask = I915_READ(DPFC_STATUS) & DPFC_COMP_SEG_MASK;
@@ -1689,7 +1689,7 @@ static int i915_fbc_false_color_get(void *data, u64 *val)
 {
 	struct drm_i915_private *dev_priv = data;
 
-	if (INTEL_GEN(dev_priv) < 7 || !HAS_FBC(dev_priv))
+	if (GT_GEN_RANGE(dev_priv, 0, 6) || !HAS_FBC(dev_priv))
 		return -ENODEV;
 
 	*val = dev_priv->fbc.false_color;
@@ -1702,7 +1702,7 @@ static int i915_fbc_false_color_set(void *data, u64 val)
 	struct drm_i915_private *dev_priv = data;
 	u32 reg;
 
-	if (INTEL_GEN(dev_priv) < 7 || !HAS_FBC(dev_priv))
+	if (GT_GEN_RANGE(dev_priv, 0, 6) || !HAS_FBC(dev_priv))
 		return -ENODEV;
 
 	mutex_lock(&dev_priv->fbc.lock);
@@ -1734,7 +1734,7 @@ static int i915_ips_status(struct seq_file *m, void *unused)
 	seq_printf(m, "Enabled by kernel parameter: %s\n",
 		   yesno(i915_modparams.enable_ips));
 
-	if (INTEL_GEN(dev_priv) >= 8) {
+	if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER)) {
 		seq_puts(m, "Currently: unknown\n");
 	} else {
 		if (I915_READ(IPS_CTL) & IPS_ENABLE)
@@ -1756,7 +1756,7 @@ static int i915_sr_status(struct seq_file *m, void *unused)
 	intel_runtime_pm_get(dev_priv);
 	intel_display_power_get(dev_priv, POWER_DOMAIN_INIT);
 
-	if (INTEL_GEN(dev_priv) >= 9)
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
 		/* no global SR status; inspect per-plane WM */;
 	else if (HAS_PCH_SPLIT(dev_priv))
 		sr_enabled = I915_READ(WM1_LP_ILK) & WM1_LP_SR_EN;
@@ -1824,7 +1824,7 @@ static int i915_ring_freq_table(struct seq_file *m, void *unused)
 	min_gpu_freq = rps->min_freq;
 	max_gpu_freq = rps->max_freq;
-	if (GT_GEN9_BC(dev_priv) || INTEL_GEN(dev_priv) >= 10) {
+	if (GT_GEN9_BC(dev_priv) || GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER)) {
 		/* Convert GT frequency to 50 HZ units */
 		min_gpu_freq /= GEN9_FREQ_SCALER;
 		max_gpu_freq /= GEN9_FREQ_SCALER;
@@ -1840,7 +1840,7 @@ static int i915_ring_freq_table(struct seq_file *m, void *unused)
 		seq_printf(m, "%d\t\t%d\t\t\t\t%d\n",
 			   intel_gpu_freq(dev_priv, (gpu_freq *
 						     (GT_GEN9_BC(dev_priv) ||
-						      INTEL_GEN(dev_priv) >= 10 ?
+						      GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER) ?
 						      GEN9_FREQ_SCALER : 1))),
 			   ((ia_freq >> 0) & 0xff) * 100,
 			   ((ia_freq >> 8) & 0xff) * 100);
@@ -2039,7 +2039,7 @@ static int i915_swizzle_info(struct seq_file *m, void *data)
 			   I915_READ16(C0DRB3));
 		seq_printf(m, "C1DRB3 = 0x%04x\n",
 			   I915_READ16(C1DRB3));
-	} else if (INTEL_GEN(dev_priv) >= 6) {
+	} else if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) {
 		seq_printf(m, "MAD_DIMM_C0 = 0x%08x\n",
 			   I915_READ(MAD_DIMM_C0));
 		seq_printf(m, "MAD_DIMM_C1 = 0x%08x\n",
@@ -2048,7 +2048,7 @@ static int i915_swizzle_info(struct seq_file *m, void *data)
 			   I915_READ(MAD_DIMM_C2));
 		seq_printf(m, "TILECTL = 0x%08x\n",
 			   I915_READ(TILECTL));
-		if (INTEL_GEN(dev_priv) >= 8)
+		if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER))
 			seq_printf(m, "GAMTARBMODE = 0x%08x\n",
 				   I915_READ(GAMTARBMODE));
 		else
@@ -2156,9 +2156,9 @@ static int i915_ppgtt_info(struct seq_file *m, void *data)
 
 	intel_runtime_pm_get(dev_priv);
 
-	if (INTEL_GEN(dev_priv) >= 8)
+	if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER))
 		gen8_ppgtt_info(m, dev_priv);
-	else if (INTEL_GEN(dev_priv) >= 6)
+	else if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER))
 		gen6_ppgtt_info(m, dev_priv);
 
 	list_for_each_entry_reverse(file, &dev->filelist, lhead) {
@@ -2269,7 +2269,7 @@ static int i915_rps_boost_info(struct seq_file *m, void *data)
 		   atomic_read(&rps->boosts));
 	mutex_unlock(&dev->filelist_mutex);
 
-	if (INTEL_GEN(dev_priv) >= 6 &&
+	if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER) &&
 	    rps->enabled &&
 	    dev_priv->gt.active_requests) {
 		u32 rpup, rpupei;
@@ -2300,7 +2300,8 @@ static int i915_rps_boost_info(struct seq_file *m, void *data)
 static int i915_llc(struct seq_file *m, void *data)
 {
 	struct drm_i915_private *dev_priv = node_to_i915(m->private);
-	const bool edram = INTEL_GEN(dev_priv) > 8;
+	const bool edram = GT_GEN_RANGE(dev_priv, 9,
+					GEN_FOREVER);
 
 	seq_printf(m, "LLC: %s\n", yesno(HAS_LLC(dev_priv)));
 	seq_printf(m, "%s: %lluMB\n", edram ? "eDRAM" : "eLLC",
@@ -2821,7 +2822,7 @@ static int i915_energy_uJ(struct seq_file *m, void *data)
 	unsigned long long power;
 	u32 units;
 
-	if (INTEL_GEN(dev_priv) < 6)
+	if (GT_GEN_RANGE(dev_priv, 0, 5))
 		return -ENODEV;
 
 	intel_runtime_pm_get(dev_priv);
@@ -2916,7 +2917,7 @@ static int i915_dmc_info(struct seq_file *m, void *unused)
 	seq_printf(m, "version: %d.%d\n", CSR_VERSION_MAJOR(csr->version),
 		   CSR_VERSION_MINOR(csr->version));
 
-	if (WARN_ON(INTEL_GEN(dev_priv) > 11))
+	if (WARN_ON(GT_GEN_RANGE(dev_priv, 12, GEN_FOREVER)))
 		goto out;
 
 	seq_printf(m, "DC3 -> DC5 count: %d\n",
@@ -3442,7 +3443,7 @@ static int i915_ddb_info(struct seq_file *m, void *unused)
 	enum pipe pipe;
 	int plane;
 
-	if (INTEL_GEN(dev_priv) < 9)
+	if (GT_GEN_RANGE(dev_priv, 0, 8))
 		return -ENODEV;
 
 	drm_modeset_lock_all(dev);
@@ -3811,7 +3812,7 @@ static void wm_latency_show(struct seq_file *m, const uint16_t wm[8])
 	 * - WM1+ latency values in 0.5us units
 	 * - latencies are in us on gen9/vlv/chv
 	 */
-	if (INTEL_GEN(dev_priv) >= 9 ||
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER) ||
 	    IS_VALLEYVIEW(dev_priv) ||
 	    IS_CHERRYVIEW(dev_priv) ||
 	    IS_G4X(dev_priv))
@@ -3831,7 +3832,7 @@ static int pri_wm_latency_show(struct seq_file *m, void *data)
 	struct drm_i915_private *dev_priv = m->private;
 	const uint16_t *latencies;
 
-	if (INTEL_GEN(dev_priv) >= 9)
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
 		latencies = dev_priv->wm.skl_latency;
 	else
 		latencies = dev_priv->wm.pri_latency;
@@ -3846,7 +3847,7 @@ static int spr_wm_latency_show(struct seq_file *m, void *data)
 	struct drm_i915_private *dev_priv = m->private;
 	const uint16_t *latencies;
 
-	if (INTEL_GEN(dev_priv) >= 9)
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
 		latencies = dev_priv->wm.skl_latency;
 	else
 		latencies = dev_priv->wm.spr_latency;
@@ -3861,7 +3862,7 @@ static int cur_wm_latency_show(struct seq_file *m, void *data)
 	struct drm_i915_private *dev_priv = m->private;
 	const uint16_t *latencies;
 
-	if (INTEL_GEN(dev_priv) >= 9)
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
 		latencies = dev_priv->wm.skl_latency;
 	else
 		latencies = dev_priv->wm.cur_latency;
@@ -3875,7 +3876,7 @@ static int pri_wm_latency_open(struct inode *inode, struct file *file)
 {
 	struct drm_i915_private *dev_priv = inode->i_private;
 
-	if (INTEL_GEN(dev_priv) < 5 && !IS_G4X(dev_priv))
+	if (GT_GEN_RANGE(dev_priv, 0, 4) && !IS_G4X(dev_priv))
 		return -ENODEV;
 
 	return single_open(file, pri_wm_latency_show, dev_priv);
@@ -3954,7 +3955,7 @@ static ssize_t pri_wm_latency_write(struct file *file, const char __user *ubuf,
 	struct drm_i915_private *dev_priv = m->private;
 	uint16_t *latencies;
 
-	if (INTEL_GEN(dev_priv) >= 9)
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
 		latencies = dev_priv->wm.skl_latency;
 	else
 		latencies = dev_priv->wm.pri_latency;
@@ -3969,7 +3970,7 @@ static ssize_t spr_wm_latency_write(struct file *file, const char __user *ubuf,
 	struct drm_i915_private *dev_priv = m->private;
 	uint16_t *latencies;
 
-	if (INTEL_GEN(dev_priv) >= 9)
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
 		latencies = dev_priv->wm.skl_latency;
 	else
 		latencies = dev_priv->wm.spr_latency;
@@ -3984,7 +3985,7 @@ static ssize_t cur_wm_latency_write(struct file *file, const char __user *ubuf,
 	struct drm_i915_private *dev_priv = m->private;
 	uint16_t *latencies;
 
-	if (INTEL_GEN(dev_priv) >= 9)
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
 		latencies = dev_priv->wm.skl_latency;
 	else
 		latencies = dev_priv->wm.cur_latency;
@@ -4141,7 +4142,7 @@ i915_ring_test_irq_set(void *data, u64 val)
 	 * From icl, we can no longer individually mask interrupt generation
 	 * from each engine.
 	 */
-	if (INTEL_GEN(i915) >= 11)
+	if (GT_GEN_RANGE(i915, 11, GEN_FOREVER))
 		return -ENODEV;
 
 	val &= INTEL_INFO(i915)->ring_mask;
@@ -4519,7 +4520,7 @@ static int i915_sseu_status(struct seq_file *m, void *unused)
 	struct drm_i915_private *dev_priv = node_to_i915(m->private);
 	struct sseu_dev_info sseu;
 
-	if (INTEL_GEN(dev_priv) < 8)
+	if (GT_GEN_RANGE(dev_priv, 0, 7))
 		return -ENODEV;
 
 	seq_puts(m, "SSEU Device Info\n");
@@ -4540,7 +4541,7 @@ static int i915_sseu_status(struct seq_file *m, void *unused)
 		broadwell_sseu_device_status(dev_priv, &sseu);
 	} else if (GT_GEN(dev_priv, 9)) {
 		gen9_sseu_device_status(dev_priv, &sseu);
-	} else if (INTEL_GEN(dev_priv) >= 10) {
+	} else if (GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER)) {
 		gen10_sseu_device_status(dev_priv, &sseu);
 	}
@@ -4555,7 +4556,7 @@ static int i915_forcewake_open(struct inode *inode, struct file *file)
 {
 	struct drm_i915_private *i915 = inode->i_private;
 
-	if (INTEL_GEN(i915) < 6)
+	if (GT_GEN_RANGE(i915, 0, 5))
 		return 0;
 
 	intel_runtime_pm_get(i915);
@@ -4568,7 +4569,7 @@ static int i915_forcewake_release(struct inode *inode, struct file *file)
 {
 	struct drm_i915_private *i915 = inode->i_private;
 
-	if (INTEL_GEN(i915) < 6)
+	if (GT_GEN_RANGE(i915, 0, 5))
 		return 0;
 
 	intel_uncore_forcewake_user_put(i915);
@@ -4664,7 +4665,7 @@ static int i915_drrs_ctl_set(void *data, u64 val)
 	struct drm_device *dev = &dev_priv->drm;
 	struct intel_crtc *crtc;
 
-	if (INTEL_GEN(dev_priv) < 7)
+	if (GT_GEN_RANGE(dev_priv, 0, 6))
 		return -ENODEV;
 
 	for_each_intel_crtc(dev, crtc) {
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 8ff2acdec1f5..226030c0787f 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -472,12 +472,12 @@ static int i915_get_bridge_dev(struct drm_i915_private *dev_priv)
 static int
 intel_alloc_mchbar_resource(struct drm_i915_private *dev_priv)
 {
-	int reg = INTEL_GEN(dev_priv) >= 4 ? MCHBAR_I965 : MCHBAR_I915;
+	int reg = GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER) ? MCHBAR_I965 : MCHBAR_I915;
 	u32 temp_lo, temp_hi = 0;
 	u64 mchbar_addr;
 	int ret;
 
-	if (INTEL_GEN(dev_priv) >= 4)
+	if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER))
 		pci_read_config_dword(dev_priv->bridge_dev, reg + 4, &temp_hi);
 	pci_read_config_dword(dev_priv->bridge_dev, reg, &temp_lo);
 	mchbar_addr = ((u64)temp_hi << 32) | temp_lo;
@@ -504,7 +504,7 @@ intel_alloc_mchbar_resource(struct drm_i915_private *dev_priv)
 		return ret;
 	}
 
-	if (INTEL_GEN(dev_priv) >= 4)
+	if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER))
 		pci_write_config_dword(dev_priv->bridge_dev, reg + 4,
 				       upper_32_bits(dev_priv->mch_res.start));
@@ -517,7 +517,7 @@ intel_alloc_mchbar_resource(struct drm_i915_private *dev_priv)
 static void
 intel_setup_mchbar(struct drm_i915_private *dev_priv)
 {
-	int mchbar_reg = INTEL_GEN(dev_priv) >= 4 ? MCHBAR_I965 : MCHBAR_I915;
+	int mchbar_reg = GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER) ? MCHBAR_I965 : MCHBAR_I915;
 	u32 temp;
 	bool enabled;
@@ -556,7 +556,7 @@ intel_setup_mchbar(struct drm_i915_private *dev_priv)
 static void
 intel_teardown_mchbar(struct drm_i915_private *dev_priv)
 {
-	int mchbar_reg = INTEL_GEN(dev_priv) >= 4 ? MCHBAR_I965 : MCHBAR_I915;
+	int mchbar_reg = GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER) ? MCHBAR_I965 : MCHBAR_I915;
 
 	if (dev_priv->mchbar_need_disable) {
 		if (IS_I915G(dev_priv) || IS_I915GM(dev_priv)) {
@@ -964,7 +964,7 @@ static int i915_mmio_setup(struct drm_i915_private *dev_priv)
 	 * the register BAR remains the same size for all the earlier
	 * generations up to Ironlake.
 	 */
-	if (INTEL_GEN(dev_priv) < 5)
+	if (GT_GEN_RANGE(dev_priv, 0, 4))
 		mmio_size = 512 * 1024;
 	else
 		mmio_size = 2 * 1024 * 1024;
@@ -1324,7 +1324,7 @@ intel_get_dram_info(struct drm_i915_private *dev_priv)
 	 */
 	dram_info->is_16gb_dimm = !GT_GEN9_LP(dev_priv);
 
-	if (INTEL_GEN(dev_priv) < 9 || IS_GEMINILAKE(dev_priv))
+	if (GT_GEN_RANGE(dev_priv, 0, 8) || IS_GEMINILAKE(dev_priv))
 		return;
 
 	/* Need to calculate bandwidth only for Gen9 */
@@ -1464,7 +1464,7 @@ static int i915_driver_init_hw(struct drm_i915_private *dev_priv)
 	 * device. The kernel then disables that interrupt source and so
 	 * prevents the other device from working properly.
 	 */
-	if (INTEL_GEN(dev_priv) >= 5) {
+	if (GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER)) {
 		if (pci_enable_msi(pdev) < 0)
 			DRM_DEBUG_DRIVER("can't enable MSI");
 	}
@@ -1962,7 +1962,7 @@ static int i915_drm_suspend_late(struct drm_device *dev, bool hibernation)
 				      get_suspend_mode(dev_priv, hibernation));
 
 	ret = 0;
-	if (INTEL_GEN(dev_priv) >= 11 || GT_GEN9_LP(dev_priv))
+	if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER) || GT_GEN9_LP(dev_priv))
 		bxt_enable_dc9(dev_priv);
 	else if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv))
 		hsw_enable_pc8(dev_priv);
@@ -1989,7 +1989,7 @@ static int i915_drm_suspend_late(struct drm_device *dev, bool hibernation)
 	 * Fujitsu FSC S7110
 	 * Acer Aspire 1830T
 	 */
-	if (!(hibernation && INTEL_GEN(dev_priv) < 6))
+	if (!(hibernation && GT_GEN_RANGE(dev_priv, 0, 5)))
 		pci_set_power_state(pdev, PCI_D3hot);
 
 out:
@@ -2152,7 +2152,7 @@ static int i915_drm_resume_early(struct drm_device *dev)
 
 	intel_uncore_resume_early(dev_priv);
 
-	if (INTEL_GEN(dev_priv) >= 11 || GT_GEN9_LP(dev_priv)) {
+	if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER) || GT_GEN9_LP(dev_priv)) {
 		gen9_sanitize_dc_state(dev_priv);
 		bxt_disable_dc9(dev_priv);
 	} else if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) {
@@ -2919,7 +2919,7 @@ static int intel_runtime_suspend(struct device *kdev)
 	intel_uncore_suspend(dev_priv);
 
 	ret = 0;
-	if (INTEL_GEN(dev_priv) >= 11) {
+	if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) {
 		icl_display_core_uninit(dev_priv);
 		bxt_enable_dc9(dev_priv);
 	} else if (GT_GEN9_LP(dev_priv)) {
@@ -3007,7 +3007,7 @@ static int intel_runtime_resume(struct device *kdev)
 	if (intel_uncore_unclaimed_mmio(dev_priv))
 		DRM_DEBUG_DRIVER("Unclaimed access during suspend, bios?\n");
 
-	if (INTEL_GEN(dev_priv) >= 11) {
+	if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) {
 		bxt_disable_dc9(dev_priv);
 		icl_display_core_init(dev_priv, true);
 		if (dev_priv->csr.dmc_payload) {
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 08c879f17b40..4440ac225441 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2600,8 +2600,8 @@ intel_info(const struct drm_i915_private *dev_priv)
 	(IS_CANNONLAKE(dev_priv) || \
 	 IS_SKL_GT3(dev_priv) || IS_SKL_GT4(dev_priv))
 
-#define HAS_GMBUS_IRQ(dev_priv) (INTEL_GEN(dev_priv) >= 4)
-#define HAS_GMBUS_BURST_READ(dev_priv) (INTEL_GEN(dev_priv) >= 10 || \
+#define HAS_GMBUS_IRQ(dev_priv) (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER))
+#define HAS_GMBUS_BURST_READ(dev_priv) (GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER) || \
 					IS_GEMINILAKE(dev_priv) || \
 					IS_KABYLAKE(dev_priv))
@@ -2614,9 +2614,9 @@ intel_info(const struct drm_i915_private *dev_priv)
 #define SUPPORTS_TV(dev_priv)		((dev_priv)->info.supports_tv)
 #define I915_HAS_HOTPLUG(dev_priv)	((dev_priv)->info.has_hotplug)
 
-#define HAS_FW_BLC(dev_priv)	(INTEL_GEN(dev_priv) > 2)
+#define HAS_FW_BLC(dev_priv)	(GT_GEN_RANGE(dev_priv, 3, GEN_FOREVER))
 #define HAS_FBC(dev_priv)	((dev_priv)->info.has_fbc)
-#define HAS_CUR_FBC(dev_priv)	(!HAS_GMCH_DISPLAY(dev_priv) && INTEL_GEN(dev_priv) >= 7)
+#define HAS_CUR_FBC(dev_priv)	(!HAS_GMCH_DISPLAY(dev_priv) && GT_GEN_RANGE(dev_priv, 7, GEN_FOREVER))
 
 #define HAS_IPS(dev_priv)	(IS_HSW_ULT(dev_priv) || IS_BROADWELL(dev_priv))
@@ -2698,7 +2698,7 @@ intel_info(const struct drm_i915_private *dev_priv)
 
 #define HAS_GMCH_DISPLAY(dev_priv) ((dev_priv)->info.has_gmch_display)
 
-#define HAS_LSPCON(dev_priv) (INTEL_GEN(dev_priv) >= 9)
+#define HAS_LSPCON(dev_priv) (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
 
 /* DPF == dynamic parity feature */
 #define HAS_L3_DPF(dev_priv) ((dev_priv)->info.has_l3_dpf)
@@ -2721,7 +2721,7 @@ static inline bool intel_vtd_active(void)
 static inline bool
 intel_scanout_needs_vtd_wa(struct drm_i915_private *dev_priv)
 {
-	return INTEL_GEN(dev_priv) >= 6 && intel_vtd_active();
+	return GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER) && intel_vtd_active();
 }
 
 static inline bool
@@ -3309,7 +3309,7 @@ void i915_gem_flush_ggtt_writes(struct drm_i915_private *dev_priv);
 static inline void i915_gem_chipset_flush(struct drm_i915_private *dev_priv)
 {
 	wmb();
-	if (INTEL_GEN(dev_priv) < 6)
+	if (GT_GEN_RANGE(dev_priv, 0, 5))
 		intel_gtt_chipset_flush();
 }
@@ -3688,7 +3688,7 @@ static inline i915_reg_t i915_vgacntrl_reg(struct drm_i915_private *dev_priv)
 {
 	if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
 		return VLV_VGACNTRL;
-	else if (INTEL_GEN(dev_priv) >= 5)
+	else if (GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER))
 		return CPU_VGACNTRL;
 	else
 		return VGACNTRL;
@@ -3848,7 +3848,7 @@ int remap_io_mapping(struct vm_area_struct *vma,
 static inline int intel_hws_csb_write_index(struct drm_i915_private *i915)
 {
-	if (INTEL_GEN(i915) >= 10)
+	if (GT_GEN_RANGE(i915, 10, GEN_FOREVER))
 		return CNL_HWS_CSB_WRITE_INDEX;
 	else
 		return I915_HWS_CSB_WRITE_INDEX;
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index d50955f7fe3f..fef3e4f58c74 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -171,7 +171,7 @@ static u32 __i915_gem_park(struct drm_i915_private *i915)
 
 	i915->gt.awake = false;
 
-	if (INTEL_GEN(i915) >= 6)
+	if (GT_GEN_RANGE(i915, 6, GEN_FOREVER))
 		gen6_rps_idle(i915);
 
 	intel_display_power_put(i915, POWER_DOMAIN_GT_IRQ);
@@ -226,7 +226,7 @@ void i915_gem_unpark(struct drm_i915_private *i915)
 	intel_enable_gt_powersave(i915);
 	i915_update_gfx_val(i915);
-	if (INTEL_GEN(i915) >= 6)
+	if (GT_GEN_RANGE(i915, 6, GEN_FOREVER))
 		gen6_rps_busy(i915);
 	i915_pmu_gt_unparked(i915);
@@ -489,7 +489,7 @@ i915_gem_object_wait_fence(struct dma_fence *fence,
 	 * each client to waitboost once in a busy period.
 	 */
 	if (rps_client && !i915_request_started(rq)) {
-		if (INTEL_GEN(rq->i915) >= 6)
+		if (GT_GEN_RANGE(rq->i915, 6, GEN_FOREVER))
 			gen6_rps_boost(rq, rps_client);
 	}
@@ -3338,7 +3338,7 @@ void i915_gem_set_wedged(struct drm_i915_private *i915)
 	i915->caps.scheduler = 0;
 
 	/* Even if the GPU reset fails, it should still stop the engines */
-	if (INTEL_GEN(i915) >= 5)
+	if (GT_GEN_RANGE(i915, 5, GEN_FOREVER))
 		intel_gpu_reset(i915, ALL_ENGINES);
 
 	/*
@@ -5054,7 +5054,7 @@ void i915_gem_sanitize(struct drm_i915_private *i915)
 	 * of the reset, so this could be applied to even earlier gen.
 	 */
 	err = -ENODEV;
-	if (INTEL_GEN(i915) >= 5 && intel_has_gpu_reset(i915))
+	if (GT_GEN_RANGE(i915, 5, GEN_FOREVER) && intel_has_gpu_reset(i915))
 		err = WARN_ON(intel_gpu_reset(i915, ALL_ENGINES));
 	if (!err)
 		intel_engines_sanitize(i915);
@@ -5216,7 +5216,7 @@ void i915_gem_resume(struct drm_i915_private *i915)
 
 void i915_gem_init_swizzling(struct drm_i915_private *dev_priv)
 {
-	if (INTEL_GEN(dev_priv) < 5 ||
+	if (GT_GEN_RANGE(dev_priv, 0, 4) ||
 	    dev_priv->mm.bit_6_swizzle_x == I915_BIT_6_SWIZZLE_NONE)
 		return;
@@ -5290,7 +5290,7 @@ int i915_gem_init_hw(struct drm_i915_private *dev_priv)
 	/* Double layer security blanket, see i915_gem_init() */
 	intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
 
-	if (HAS_EDRAM(dev_priv) && INTEL_GEN(dev_priv) < 9)
+	if (HAS_EDRAM(dev_priv) && GT_GEN_RANGE(dev_priv, 0, 8))
 		I915_WRITE(HSW_IDICR, I915_READ(HSW_IDICR) | IDIHASHMSK(0xf));
 
 	if (IS_HASWELL(dev_priv))
@@ -5699,10 +5699,10 @@ i915_gem_load_init_fences(struct drm_i915_private *dev_priv)
 {
 	int i;
 
-	if (INTEL_GEN(dev_priv) >= 7 && !IS_VALLEYVIEW(dev_priv) &&
+	if (GT_GEN_RANGE(dev_priv, 7, GEN_FOREVER) && !IS_VALLEYVIEW(dev_priv) &&
 	    !IS_CHERRYVIEW(dev_priv))
 		dev_priv->num_fence_regs = 32;
-	else if (INTEL_GEN(dev_priv) >= 4 ||
+	else if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER) ||
 		 IS_I945G(dev_priv) || IS_I945GM(dev_priv) ||
 		 IS_G33(dev_priv) || IS_PINEVIEW(dev_priv))
 		dev_priv->num_fence_regs = 16;
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 031a2a358f19..09cc74420409 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -121,7 +121,7 @@ static inline int new_hw_id(struct drm_i915_private *i915, gfp_t gfp)
 
 	lockdep_assert_held(&i915->contexts.mutex);
 
-	if (INTEL_GEN(i915) >= 11)
+	if (GT_GEN_RANGE(i915, 11, GEN_FOREVER))
 		max = GEN11_MAX_CONTEXT_HW_ID;
 	else if (USES_GUC_SUBMISSION(i915))
 		/*
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index f1df4114305a..9b936e5877fe 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -2518,7 +2518,7 @@ i915_gem_execbuffer_ioctl(struct drm_device *dev, void *data,
 		exec2_list[i].relocs_ptr = exec_list[i].relocs_ptr;
 		exec2_list[i].alignment = exec_list[i].alignment;
 		exec2_list[i].offset = exec_list[i].offset;
-		if (INTEL_GEN(to_i915(dev)) < 4)
+		if (GT_GEN_RANGE(to_i915(dev), 0, 3))
 			exec2_list[i].flags = EXEC_OBJECT_NEEDS_FENCE;
 		else
 			exec2_list[i].flags = 0;
diff --git a/drivers/gpu/drm/i915/i915_gem_fence_reg.c b/drivers/gpu/drm/i915/i915_gem_fence_reg.c
index caafbf7e62a4..3f45b94d9cc0 100644
--- a/drivers/gpu/drm/i915/i915_gem_fence_reg.c
+++ b/drivers/gpu/drm/i915/i915_gem_fence_reg.c
@@ -64,7 +64,7 @@ static void i965_write_fence_reg(struct drm_i915_fence_reg *fence,
 	int fence_pitch_shift;
 	u64 val;
 
-	if (INTEL_GEN(fence->i915) >= 6) {
+	if (GT_GEN_RANGE(fence->i915, 6, GEN_FOREVER)) {
 		fence_reg_lo = FENCE_REG_GEN6_LO(fence->id);
 		fence_reg_hi = FENCE_REG_GEN6_HI(fence->id);
 		fence_pitch_shift = GEN6_FENCE_PITCH_SHIFT;
@@ -557,7 +557,7 @@ i915_gem_detect_bit_6_swizzle(struct drm_i915_private *dev_priv)
 	uint32_t
swizzle_x = I915_BIT_6_SWIZZLE_UNKNOWN; uint32_t swizzle_y = I915_BIT_6_SWIZZLE_UNKNOWN; - if (INTEL_GEN(dev_priv) >= 8 || IS_VALLEYVIEW(dev_priv)) { + if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER) || IS_VALLEYVIEW(dev_priv)) { /* * On BDW+, swizzling is not used. We leave the CPU memory * controller in charge of optimizing memory accesses without @@ -567,7 +567,7 @@ i915_gem_detect_bit_6_swizzle(struct drm_i915_private *dev_priv) */ swizzle_x = I915_BIT_6_SWIZZLE_NONE; swizzle_y = I915_BIT_6_SWIZZLE_NONE; - } else if (INTEL_GEN(dev_priv) >= 6) { + } else if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) { if (dev_priv->preserve_bios_swizzle) { if (I915_READ(DISP_ARB_CTL) & DISP_TILE_SURFACE_SWIZZLING) { diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c index 4ae6166c6593..95001f074e72 100644 --- a/drivers/gpu/drm/i915/i915_gem_gtt.c +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c @@ -2170,7 +2170,7 @@ static void gtt_write_workarounds(struct drm_i915_private *dev_priv) I915_WRITE(GEN8_L3_LRA_1_GPGPU, GEN8_L3_LRA_1_GPGPU_DEFAULT_VALUE_CHV); else if (GT_GEN9_LP(dev_priv)) I915_WRITE(GEN8_L3_LRA_1_GPGPU, GEN9_L3_LRA_1_GPGPU_DEFAULT_VALUE_BXT); - else if (INTEL_GEN(dev_priv) >= 9) + else if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) I915_WRITE(GEN8_L3_LRA_1_GPGPU, GEN9_L3_LRA_1_GPGPU_DEFAULT_VALUE_SKL); /* @@ -2185,7 +2185,7 @@ static void gtt_write_workarounds(struct drm_i915_private *dev_priv) * driver. 
*/ if (HAS_PAGE_SIZES(dev_priv, I915_GTT_PAGE_SIZE_64K) && - INTEL_GEN(dev_priv) <= 10) + GT_GEN_RANGE(dev_priv, 0, 10)) I915_WRITE(GEN8_GAMW_ECO_DEV_RW_IA, I915_READ(GEN8_GAMW_ECO_DEV_RW_IA) | GAMW_ECO_ENABLE_64K_IPS_FIELD); @@ -2206,7 +2206,7 @@ int i915_ppgtt_init_hw(struct drm_i915_private *dev_priv) static struct i915_hw_ppgtt * __hw_ppgtt_create(struct drm_i915_private *i915) { - if (INTEL_GEN(i915) < 8) + if (GT_GEN_RANGE(i915, 0, 7)) return gen6_ppgtt_create(i915); else return gen8_ppgtt_create(i915); @@ -2335,9 +2335,9 @@ static void gen8_check_faults(struct drm_i915_private *dev_priv) void i915_check_and_clear_faults(struct drm_i915_private *dev_priv) { /* From GEN8 onwards we only have one 'All Engine Fault Register' */ - if (INTEL_GEN(dev_priv) >= 8) + if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER)) gen8_check_faults(dev_priv); - else if (INTEL_GEN(dev_priv) >= 6) + else if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) gen6_check_faults(dev_priv); else return; @@ -2352,7 +2352,7 @@ void i915_gem_suspend_gtt_mappings(struct drm_i915_private *dev_priv) /* Don't bother messing with faults pre GEN6 as we have little * documentation supporting that it's a good idea. */ - if (INTEL_GEN(dev_priv) < 6) + if (GT_GEN_RANGE(dev_priv, 0, 5)) return; i915_check_and_clear_faults(dev_priv); @@ -3002,7 +3002,7 @@ static int ggtt_probe_common(struct i915_ggtt *ggtt, u64 size) * resort to an uncached mapping. The WC issue is easily caught by the * readback check when writing GTT PTE entries. 
*/ - if (GT_GEN9_LP(dev_priv) || INTEL_GEN(dev_priv) >= 10) + if (GT_GEN9_LP(dev_priv) || GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER)) ggtt->gsm = ioremap_nocache(phys_addr, size); else ggtt->gsm = ioremap_wc(phys_addr, size); @@ -3301,7 +3301,7 @@ static void setup_private_pat(struct drm_i915_private *dev_priv) ppat->i915 = dev_priv; - if (INTEL_GEN(dev_priv) >= 10) + if (GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER)) cnl_setup_private_ppat(ppat); else if (IS_CHERRYVIEW(dev_priv) || GT_GEN9_LP(dev_priv)) chv_setup_private_ppat(ppat); @@ -3420,7 +3420,7 @@ static int gen6_gmch_probe(struct i915_ggtt *ggtt) ggtt->vm.pte_encode = hsw_pte_encode; else if (IS_VALLEYVIEW(dev_priv)) ggtt->vm.pte_encode = byt_pte_encode; - else if (INTEL_GEN(dev_priv) >= 7) + else if (GT_GEN_RANGE(dev_priv, 7, GEN_FOREVER)) ggtt->vm.pte_encode = ivb_pte_encode; else ggtt->vm.pte_encode = snb_pte_encode; @@ -3487,9 +3487,9 @@ int i915_ggtt_probe_hw(struct drm_i915_private *dev_priv) ggtt->vm.i915 = dev_priv; ggtt->vm.dma = &dev_priv->drm.pdev->dev; - if (INTEL_GEN(dev_priv) <= 5) + if (GT_GEN_RANGE(dev_priv, 0, 5)) ret = i915_gmch_probe(ggtt); - else if (INTEL_GEN(dev_priv) < 8) + else if (GT_GEN_RANGE(dev_priv, 0, 7)) ret = gen6_gmch_probe(ggtt); else ret = gen8_gmch_probe(ggtt); @@ -3588,7 +3588,7 @@ int i915_ggtt_init_hw(struct drm_i915_private *dev_priv) int i915_ggtt_enable_hw(struct drm_i915_private *dev_priv) { - if (INTEL_GEN(dev_priv) < 6 && !intel_enable_gtt()) + if (GT_GEN_RANGE(dev_priv, 0, 5) && !intel_enable_gtt()) return -EIO; return 0; @@ -3650,7 +3650,7 @@ void i915_gem_restore_gtt_mappings(struct drm_i915_private *dev_priv) ggtt->vm.closed = false; i915_ggtt_invalidate(dev_priv); - if (INTEL_GEN(dev_priv) >= 8) { + if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER)) { struct intel_ppat *ppat = &dev_priv->ppat; bitmap_set(ppat->dirty, 0, ppat->max_entries); diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c index d01dea84ffae..b346b874a82f 100644 
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c +++ b/drivers/gpu/drm/i915/i915_gem_stolen.c @@ -52,7 +52,7 @@ int i915_gem_stolen_insert_node_in_range(struct drm_i915_private *dev_priv, return -ENODEV; /* WaSkipStolenMemoryFirstPage:bdw+ */ - if (INTEL_GEN(dev_priv) >= 8 && start < 4096) + if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER) && start < 4096) start = 4096; mutex_lock(&dev_priv->mm.stolen_lock); @@ -95,7 +95,7 @@ static int i915_adjust_stolen(struct drm_i915_private *dev_priv, */ /* Make sure we don't clobber the GTT if it's within stolen memory */ - if (INTEL_GEN(dev_priv) <= 4 && + if (GT_GEN_RANGE(dev_priv, 0, 4) && !IS_G33(dev_priv) && !IS_PINEVIEW(dev_priv) && !IS_G4X(dev_priv)) { struct resource stolen[2] = {*dsm, *dsm}; struct resource ggtt_res; @@ -384,7 +384,7 @@ int i915_gem_init_stolen(struct drm_i915_private *dev_priv) return 0; } - if (intel_vtd_active() && INTEL_GEN(dev_priv) < 8) { + if (intel_vtd_active() && GT_GEN_RANGE(dev_priv, 0, 7)) { DRM_INFO("DMAR active, disabling use of stolen memory\n"); return 0; } diff --git a/drivers/gpu/drm/i915/i915_gem_tiling.c b/drivers/gpu/drm/i915/i915_gem_tiling.c index 8a1976c523b0..430d4da67d8a 100644 --- a/drivers/gpu/drm/i915/i915_gem_tiling.c +++ b/drivers/gpu/drm/i915/i915_gem_tiling.c @@ -80,7 +80,7 @@ u32 i915_gem_fence_size(struct drm_i915_private *i915, GEM_BUG_ON(!stride); - if (INTEL_GEN(i915) >= 4) { + if (GT_GEN_RANGE(i915, 4, GEN_FOREVER)) { stride *= i915_gem_tile_height(tiling); GEM_BUG_ON(!IS_ALIGNED(stride, I965_FENCE_PAGE)); return roundup(size, stride); @@ -120,7 +120,7 @@ u32 i915_gem_fence_alignment(struct drm_i915_private *i915, u32 size, if (tiling == I915_TILING_NONE) return I915_GTT_MIN_ALIGNMENT; - if (INTEL_GEN(i915) >= 4) + if (GT_GEN_RANGE(i915, 4, GEN_FOREVER)) return I965_FENCE_PAGE; /* @@ -148,10 +148,10 @@ i915_tiling_ok(struct drm_i915_gem_object *obj, /* check maximum stride & object size */ /* i965+ stores the end address of the gtt mapping in the fence * reg, so dont 
bother to check the size */ - if (INTEL_GEN(i915) >= 7) { + if (GT_GEN_RANGE(i915, 7, GEN_FOREVER)) { if (stride / 128 > GEN7_FENCE_MAX_PITCH_VAL) return false; - } else if (INTEL_GEN(i915) >= 4) { + } else if (GT_GEN_RANGE(i915, 4, GEN_FOREVER)) { if (stride / 128 > I965_FENCE_MAX_PITCH_VAL) return false; } else { diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c index c7b492b6cf4e..0de8ce65053d 100644 --- a/drivers/gpu/drm/i915/i915_gpu_error.c +++ b/drivers/gpu/drm/i915/i915_gpu_error.c @@ -411,13 +411,13 @@ static void error_print_instdone(struct drm_i915_error_state_buf *m, err_printf(m, " INSTDONE: 0x%08x\n", ee->instdone.instdone); - if (ee->engine_id != RCS || INTEL_GEN(m->i915) <= 3) + if (ee->engine_id != RCS || GT_GEN_RANGE(m->i915, 0, 3)) return; err_printf(m, " SC_INSTDONE: 0x%08x\n", ee->instdone.slice_common); - if (INTEL_GEN(m->i915) <= 6) + if (GT_GEN_RANGE(m->i915, 0, 6)) return; for_each_instdone_slice_subslice(m->i915, slice, subslice) @@ -492,7 +492,7 @@ static void error_print_engine(struct drm_i915_error_state_buf *m, upper_32_bits(start), lower_32_bits(start), upper_32_bits(end), lower_32_bits(end)); } - if (INTEL_GEN(m->i915) >= 4) { + if (GT_GEN_RANGE(m->i915, 4, GEN_FOREVER)) { err_printf(m, " BBADDR: 0x%08x_%08x\n", (u32)(ee->bbaddr>>32), (u32)ee->bbaddr); err_printf(m, " BB_STATE: 0x%08x\n", ee->bbstate); @@ -501,7 +501,7 @@ static void error_print_engine(struct drm_i915_error_state_buf *m, err_printf(m, " INSTPM: 0x%08x\n", ee->instpm); err_printf(m, " FADDR: 0x%08x %08x\n", upper_32_bits(ee->faddr), lower_32_bits(ee->faddr)); - if (INTEL_GEN(m->i915) >= 6) { + if (GT_GEN_RANGE(m->i915, 6, GEN_FOREVER)) { err_printf(m, " RC PSMI: 0x%08x\n", ee->rc_psmi); err_printf(m, " FAULT_REG: 0x%08x\n", ee->fault_reg); err_printf(m, " SYNC_0: 0x%08x\n", @@ -515,7 +515,7 @@ static void error_print_engine(struct drm_i915_error_state_buf *m, if (HAS_PPGTT(m->i915)) { err_printf(m, " GFX_MODE: 0x%08x\n", 
ee->vm_info.gfx_mode); - if (INTEL_GEN(m->i915) >= 8) { + if (GT_GEN_RANGE(m->i915, 8, GEN_FOREVER)) { int i; for (i = 0; i < 4; i++) err_printf(m, " PDP%d: 0x%016llx\n", @@ -710,10 +710,10 @@ int i915_error_state_to_str(struct drm_i915_error_state_buf *m, for (i = 0; i < error->nfence; i++) err_printf(m, " fence[%d] = %08llx\n", i, error->fence[i]); - if (INTEL_GEN(dev_priv) >= 6) { + if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) { err_printf(m, "ERROR: 0x%08x\n", error->error); - if (INTEL_GEN(dev_priv) >= 8) + if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER)) err_printf(m, "FAULT_TLB_DATA: 0x%08x 0x%08x\n", error->fault_data1, error->fault_data0); @@ -1106,10 +1106,10 @@ static void gem_record_fences(struct i915_gpu_state *error) struct drm_i915_private *dev_priv = error->i915; int i; - if (INTEL_GEN(dev_priv) >= 6) { + if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) { for (i = 0; i < dev_priv->num_fence_regs; i++) error->fence[i] = I915_READ64(FENCE_REG_GEN6_LO(i)); - } else if (INTEL_GEN(dev_priv) >= 4) { + } else if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER)) { for (i = 0; i < dev_priv->num_fence_regs; i++) error->fence[i] = I915_READ64(FENCE_REG_965_LO(i)); } else { @@ -1190,9 +1190,9 @@ static void error_record_engine_registers(struct i915_gpu_state *error, { struct drm_i915_private *dev_priv = engine->i915; - if (INTEL_GEN(dev_priv) >= 6) { + if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) { ee->rc_psmi = I915_READ(RING_PSMI_CTL(engine->mmio_base)); - if (INTEL_GEN(dev_priv) >= 8) { + if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER)) { ee->fault_reg = I915_READ(GEN8_RING_FAULT_REG); } else { gen6_record_semaphore_state(engine, ee); @@ -1200,13 +1200,13 @@ static void error_record_engine_registers(struct i915_gpu_state *error, } } - if (INTEL_GEN(dev_priv) >= 4) { + if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER)) { ee->faddr = I915_READ(RING_DMA_FADD(engine->mmio_base)); ee->ipeir = I915_READ(RING_IPEIR(engine->mmio_base)); ee->ipehr = I915_READ(RING_IPEHR(engine->mmio_base)); 
ee->instps = I915_READ(RING_INSTPS(engine->mmio_base)); ee->bbaddr = I915_READ(RING_BBADDR(engine->mmio_base)); - if (INTEL_GEN(dev_priv) >= 8) { + if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER)) { ee->faddr |= (u64) I915_READ(RING_DMA_FADD_UDW(engine->mmio_base)) << 32; ee->bbaddr |= (u64) I915_READ(RING_BBADDR_UDW(engine->mmio_base)) << 32; } @@ -1228,7 +1228,7 @@ static void error_record_engine_registers(struct i915_gpu_state *error, ee->head = I915_READ_HEAD(engine); ee->tail = I915_READ_TAIL(engine); ee->ctl = I915_READ_CTL(engine); - if (INTEL_GEN(dev_priv) > 2) + if (GT_GEN_RANGE(dev_priv, 3, GEN_FOREVER)) ee->mode = I915_READ_MODE(engine); if (!HWS_NEEDS_PHYSICAL(dev_priv)) { @@ -1278,7 +1278,7 @@ static void error_record_engine_registers(struct i915_gpu_state *error, else if (GT_GEN(dev_priv, 7)) ee->vm_info.pp_dir_base = I915_READ(RING_PP_DIR_BASE(engine)); - else if (INTEL_GEN(dev_priv) >= 8) + else if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER)) for (i = 0; i < 4; i++) { ee->vm_info.pdp[i] = I915_READ(GEN8_RING_PDP_UDW(engine, i)); @@ -1648,7 +1648,7 @@ static void capture_reg_state(struct i915_gpu_state *error) if (GT_GEN(dev_priv, 7)) error->err_int = I915_READ(GEN7_ERR_INT); - if (INTEL_GEN(dev_priv) >= 8) { + if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER)) { error->fault_data0 = I915_READ(GEN8_FAULT_TLB_DATA0); error->fault_data1 = I915_READ(GEN8_FAULT_TLB_DATA1); } @@ -1660,16 +1660,16 @@ static void capture_reg_state(struct i915_gpu_state *error) } /* 2: Registers which belong to multiple generations */ - if (INTEL_GEN(dev_priv) >= 7) + if (GT_GEN_RANGE(dev_priv, 7, GEN_FOREVER)) error->forcewake = I915_READ_FW(FORCEWAKE_MT); - if (INTEL_GEN(dev_priv) >= 6) { + if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) { error->derrmr = I915_READ(DERRMR); error->error = I915_READ(ERROR_GEN6); error->done_reg = I915_READ(DONE_REG); } - if (INTEL_GEN(dev_priv) >= 5) + if (GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER)) error->ccid = I915_READ(CCID); /* 3: Feature specific registers 
*/ @@ -1679,7 +1679,7 @@ static void capture_reg_state(struct i915_gpu_state *error) } /* 4: Everything else */ - if (INTEL_GEN(dev_priv) >= 11) { + if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) { error->ier = I915_READ(GEN8_DE_MISC_IER); error->gtier[0] = I915_READ(GEN11_RENDER_COPY_INTR_ENABLE); error->gtier[1] = I915_READ(GEN11_VCS_VECS_INTR_ENABLE); @@ -1688,7 +1688,7 @@ static void capture_reg_state(struct i915_gpu_state *error) error->gtier[4] = I915_READ(GEN11_CRYPTO_RSVD_INTR_ENABLE); error->gtier[5] = I915_READ(GEN11_GUNIT_CSME_INTR_ENABLE); error->ngtier = 6; - } else if (INTEL_GEN(dev_priv) >= 8) { + } else if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER)) { error->ier = I915_READ(GEN8_DE_MISC_IER); for (i = 0; i < 4; i++) error->gtier[i] = I915_READ(GEN8_GT_IER(i)); diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c index e53e9ccf90f6..04f2ecc09383 100644 --- a/drivers/gpu/drm/i915/i915_irq.c +++ b/drivers/gpu/drm/i915/i915_irq.c @@ -359,16 +359,16 @@ void gen5_disable_gt_irq(struct drm_i915_private *dev_priv, uint32_t mask) static i915_reg_t gen6_pm_iir(struct drm_i915_private *dev_priv) { - WARN_ON_ONCE(INTEL_GEN(dev_priv) >= 11); + WARN_ON_ONCE(GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)); - return INTEL_GEN(dev_priv) >= 8 ? GEN8_GT_IIR(2) : GEN6_PMIIR; + return GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER) ? 
GEN8_GT_IIR(2) : GEN6_PMIIR; } static i915_reg_t gen6_pm_imr(struct drm_i915_private *dev_priv) { - if (INTEL_GEN(dev_priv) >= 11) + if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) return GEN11_GPM_WGBOXPERF_INTR_MASK; - else if (INTEL_GEN(dev_priv) >= 8) + else if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER)) return GEN8_GT_IMR(2); else return GEN6_PMIMR; @@ -376,9 +376,9 @@ static i915_reg_t gen6_pm_imr(struct drm_i915_private *dev_priv) static i915_reg_t gen6_pm_ier(struct drm_i915_private *dev_priv) { - if (INTEL_GEN(dev_priv) >= 11) + if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) return GEN11_GPM_WGBOXPERF_INTR_ENABLE; - else if (INTEL_GEN(dev_priv) >= 8) + else if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER)) return GEN8_GT_IER(2); else return GEN6_PMIER; @@ -493,7 +493,7 @@ void gen6_enable_rps_interrupts(struct drm_i915_private *dev_priv) spin_lock_irq(&dev_priv->irq_lock); WARN_ON_ONCE(rps->pm_iir); - if (INTEL_GEN(dev_priv) >= 11) + if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) WARN_ON_ONCE(gen11_reset_one_iir(dev_priv, 0, GEN11_GTPM)); else WARN_ON_ONCE(I915_READ(gen6_pm_iir(dev_priv)) & dev_priv->pm_rps_events); @@ -527,7 +527,7 @@ void gen6_disable_rps_interrupts(struct drm_i915_private *dev_priv) * state of the worker can be discarded. 
*/ cancel_work_sync(&rps->work); - if (INTEL_GEN(dev_priv) >= 11) + if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) gen11_reset_rps_interrupts(dev_priv); else gen6_reset_rps_interrupts(dev_priv); @@ -668,7 +668,7 @@ u32 i915_pipestat_enable_mask(struct drm_i915_private *dev_priv, lockdep_assert_held(&dev_priv->irq_lock); - if (INTEL_GEN(dev_priv) < 5) + if (GT_GEN_RANGE(dev_priv, 0, 4)) goto out; /* @@ -759,7 +759,7 @@ static void i915_enable_asle_pipestat(struct drm_i915_private *dev_priv) spin_lock_irq(&dev_priv->irq_lock); i915_enable_pipestat(dev_priv, PIPE_B, PIPE_LEGACY_BLC_EVENT_STATUS); - if (INTEL_GEN(dev_priv) >= 4) + if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER)) i915_enable_pipestat(dev_priv, PIPE_A, PIPE_LEGACY_BLC_EVENT_STATUS); @@ -1030,7 +1030,7 @@ static bool i915_get_crtc_scanoutpos(struct drm_device *dev, unsigned int pipe, if (stime) *stime = ktime_get(); - if (GT_GEN(dev_priv, 2) || IS_G4X(dev_priv) || INTEL_GEN(dev_priv) >= 5) { + if (GT_GEN(dev_priv, 2) || IS_G4X(dev_priv) || GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER)) { /* No obvious pixelcount register. Only query vertical * scanout position from Display scan line register. */ @@ -1090,7 +1090,7 @@ static bool i915_get_crtc_scanoutpos(struct drm_device *dev, unsigned int pipe, else position += vtotal - vbl_end; - if (GT_GEN(dev_priv, 2) || IS_G4X(dev_priv) || INTEL_GEN(dev_priv) >= 5) { + if (GT_GEN(dev_priv, 2) || IS_G4X(dev_priv) || GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER)) { *vpos = position; *hpos = 0; } else { @@ -1756,7 +1756,7 @@ static void display_pipe_crc_irq_handler(struct drm_i915_private *dev_priv, * don't trust that one either. 
*/ if (pipe_crc->skipped <= 0 || - (INTEL_GEN(dev_priv) >= 8 && pipe_crc->skipped == 1)) { + (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER) && pipe_crc->skipped == 1)) { pipe_crc->skipped++; spin_unlock(&pipe_crc->lock); return; @@ -1806,12 +1806,12 @@ static void i9xx_pipe_crc_irq_handler(struct drm_i915_private *dev_priv, { uint32_t res1, res2; - if (INTEL_GEN(dev_priv) >= 3) + if (GT_GEN_RANGE(dev_priv, 3, GEN_FOREVER)) res1 = I915_READ(PIPE_CRC_RES_RES1_I915(pipe)); else res1 = 0; - if (INTEL_GEN(dev_priv) >= 5 || IS_G4X(dev_priv)) + if (GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER) || IS_G4X(dev_priv)) res2 = I915_READ(PIPE_CRC_RES_RES2_G4X(pipe)); else res2 = 0; @@ -1840,7 +1840,7 @@ static void gen6_rps_irq_handler(struct drm_i915_private *dev_priv, u32 pm_iir) spin_unlock(&dev_priv->irq_lock); } - if (INTEL_GEN(dev_priv) >= 8) + if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER)) return; if (HAS_VEBOX(dev_priv)) { @@ -2633,7 +2633,7 @@ static irqreturn_t ironlake_irq_handler(int irq, void *arg) if (gt_iir) { I915_WRITE(GTIIR, gt_iir); ret = IRQ_HANDLED; - if (INTEL_GEN(dev_priv) >= 6) + if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) snb_gt_irq_handler(dev_priv, gt_iir); else ilk_gt_irq_handler(dev_priv, gt_iir); @@ -2643,13 +2643,13 @@ static irqreturn_t ironlake_irq_handler(int irq, void *arg) if (de_iir) { I915_WRITE(DEIIR, de_iir); ret = IRQ_HANDLED; - if (INTEL_GEN(dev_priv) >= 7) + if (GT_GEN_RANGE(dev_priv, 7, GEN_FOREVER)) ivb_display_irq_handler(dev_priv, de_iir); else ilk_display_irq_handler(dev_priv, de_iir); } - if (INTEL_GEN(dev_priv) >= 6) { + if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) { u32 pm_iir = I915_READ(GEN6_PMIIR); if (pm_iir) { I915_WRITE(GEN6_PMIIR, pm_iir); @@ -2753,7 +2753,7 @@ gen8_de_irq_handler(struct drm_i915_private *dev_priv, u32 master_ctl) DRM_ERROR("The master control interrupt lied (DE MISC)!\n"); } - if (INTEL_GEN(dev_priv) >= 11 && (master_ctl & GEN11_DE_HPD_IRQ)) { + if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER) && (master_ctl & 
GEN11_DE_HPD_IRQ)) { iir = I915_READ(GEN11_DE_HPD_IIR); if (iir) { I915_WRITE(GEN11_DE_HPD_IIR, iir); @@ -2774,16 +2774,16 @@ gen8_de_irq_handler(struct drm_i915_private *dev_priv, u32 master_ctl) ret = IRQ_HANDLED; tmp_mask = GEN8_AUX_CHANNEL_A; - if (INTEL_GEN(dev_priv) >= 9) + if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) tmp_mask |= GEN9_AUX_CHANNEL_B | GEN9_AUX_CHANNEL_C | GEN9_AUX_CHANNEL_D; - if (INTEL_GEN(dev_priv) >= 11) + if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) tmp_mask |= ICL_AUX_CHANNEL_E; if (IS_CNL_WITH_PORT_F(dev_priv) || - INTEL_GEN(dev_priv) >= 11) + GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) tmp_mask |= CNL_AUX_CHANNEL_F; if (iir & tmp_mask) { @@ -2844,7 +2844,7 @@ gen8_de_irq_handler(struct drm_i915_private *dev_priv, u32 master_ctl) intel_cpu_fifo_underrun_irq_handler(dev_priv, pipe); fault_errors = iir; - if (INTEL_GEN(dev_priv) >= 9) + if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) fault_errors &= GEN9_DE_PIPE_IRQ_FAULT_ERRORS; else fault_errors &= GEN8_DE_PIPE_IRQ_FAULT_ERRORS; @@ -3246,7 +3246,7 @@ void i915_clear_error_registers(struct drm_i915_private *dev_priv) if (!GT_GEN(dev_priv, 2)) I915_WRITE(PGTBL_ER, I915_READ(PGTBL_ER)); - if (INTEL_GEN(dev_priv) < 4) + if (GT_GEN_RANGE(dev_priv, 0, 3)) I915_WRITE(IPEIR, I915_READ(IPEIR)); else I915_WRITE(IPEIR_I965, I915_READ(IPEIR_I965)); @@ -3263,11 +3263,11 @@ void i915_clear_error_registers(struct drm_i915_private *dev_priv) I915_WRITE(IIR, I915_MASTER_ERROR_INTERRUPT); } - if (INTEL_GEN(dev_priv) >= 8) { + if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER)) { I915_WRITE(GEN8_RING_FAULT_REG, I915_READ(GEN8_RING_FAULT_REG) & ~RING_FAULT_VALID); POSTING_READ(GEN8_RING_FAULT_REG); - } else if (INTEL_GEN(dev_priv) >= 6) { + } else if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) { struct intel_engine_cs *engine; enum intel_engine_id id; @@ -3417,7 +3417,7 @@ static int ironlake_enable_vblank(struct drm_device *dev, unsigned int pipe) { struct drm_i915_private *dev_priv = to_i915(dev); unsigned long irqflags; - 
uint32_t bit = INTEL_GEN(dev_priv) >= 7 ? + uint32_t bit = GT_GEN_RANGE(dev_priv, 7, GEN_FOREVER) ? DE_PIPE_VBLANK_IVB(pipe) : DE_PIPE_VBLANK(pipe); spin_lock_irqsave(&dev_priv->irq_lock, irqflags); @@ -3479,7 +3479,7 @@ static void ironlake_disable_vblank(struct drm_device *dev, unsigned int pipe) { struct drm_i915_private *dev_priv = to_i915(dev); unsigned long irqflags; - uint32_t bit = INTEL_GEN(dev_priv) >= 7 ? + uint32_t bit = GT_GEN_RANGE(dev_priv, 7, GEN_FOREVER) ? DE_PIPE_VBLANK_IVB(pipe) : DE_PIPE_VBLANK(pipe); spin_lock_irqsave(&dev_priv->irq_lock, irqflags); @@ -3531,7 +3531,7 @@ static void ibx_irq_pre_postinstall(struct drm_device *dev) static void gen5_gt_irq_reset(struct drm_i915_private *dev_priv) { GEN3_IRQ_RESET(GT); - if (INTEL_GEN(dev_priv) >= 6) + if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) GEN3_IRQ_RESET(GEN6_PM); } @@ -3932,12 +3932,12 @@ static void ilk_hpd_irq_setup(struct drm_i915_private *dev_priv) { u32 hotplug_irqs, enabled_irqs; - if (INTEL_GEN(dev_priv) >= 8) { + if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER)) { hotplug_irqs = GEN8_PORT_DP_A_HOTPLUG; enabled_irqs = intel_hpd_enabled_irqs(dev_priv, hpd_bdw); bdw_update_port_irq(dev_priv, hotplug_irqs, enabled_irqs); - } else if (INTEL_GEN(dev_priv) >= 7) { + } else if (GT_GEN_RANGE(dev_priv, 7, GEN_FOREVER)) { hotplug_irqs = DE_DP_A_HOTPLUG_IVB; enabled_irqs = intel_hpd_enabled_irqs(dev_priv, hpd_ivb); @@ -4050,7 +4050,7 @@ static void gen5_gt_irq_postinstall(struct drm_device *dev) GEN3_IRQ_INIT(GT, dev_priv->gt_irq_mask, gt_irqs); - if (INTEL_GEN(dev_priv) >= 6) { + if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) { /* * RPS interrupts will get enabled/disabled on demand when RPS * itself is enabled/disabled. 
@@ -4070,7 +4070,7 @@ static int ironlake_irq_postinstall(struct drm_device *dev) struct drm_i915_private *dev_priv = to_i915(dev); u32 display_mask, extra_mask; - if (INTEL_GEN(dev_priv) >= 7) { + if (GT_GEN_RANGE(dev_priv, 7, GEN_FOREVER)) { display_mask = (DE_MASTER_IRQ_CONTROL | DE_GSE_IVB | DE_PCH_EVENT_IVB | DE_AUX_CHANNEL_A_IVB); extra_mask = (DE_PIPEC_VBLANK_IVB | DE_PIPEB_VBLANK_IVB | @@ -4204,10 +4204,10 @@ static void gen8_de_irq_postinstall(struct drm_i915_private *dev_priv) u32 de_misc_masked = GEN8_DE_EDP_PSR; enum pipe pipe; - if (INTEL_GEN(dev_priv) <= 10) + if (GT_GEN_RANGE(dev_priv, 0, 10)) de_misc_masked |= GEN8_DE_MISC_GSE; - if (INTEL_GEN(dev_priv) >= 9) { + if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) { de_pipe_masked |= GEN9_DE_PIPE_IRQ_FAULT_ERRORS; de_port_masked |= GEN9_AUX_CHANNEL_B | GEN9_AUX_CHANNEL_C | GEN9_AUX_CHANNEL_D; @@ -4217,10 +4217,10 @@ static void gen8_de_irq_postinstall(struct drm_i915_private *dev_priv) de_pipe_masked |= GEN8_DE_PIPE_IRQ_FAULT_ERRORS; } - if (INTEL_GEN(dev_priv) >= 11) + if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) de_port_masked |= ICL_AUX_CHANNEL_E; - if (IS_CNL_WITH_PORT_F(dev_priv) || INTEL_GEN(dev_priv) >= 11) + if (IS_CNL_WITH_PORT_F(dev_priv) || GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) de_port_masked |= CNL_AUX_CHANNEL_F; de_pipe_enables = de_pipe_masked | GEN8_PIPE_VBLANK | @@ -4248,7 +4248,7 @@ static void gen8_de_irq_postinstall(struct drm_i915_private *dev_priv) GEN3_IRQ_INIT(GEN8_DE_PORT_, ~de_port_masked, de_port_enables); GEN3_IRQ_INIT(GEN8_DE_MISC_, ~de_misc_masked, de_misc_masked); - if (INTEL_GEN(dev_priv) >= 11) { + if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) { u32 de_hpd_masked = 0; u32 de_hpd_enables = GEN11_DE_TC_HOTPLUG_MASK | GEN11_DE_TBT_HOTPLUG_MASK; @@ -4827,16 +4827,16 @@ void intel_irq_init(struct drm_i915_private *dev_priv) * * TODO: verify if this can be reproduced on VLV,CHV. 
  */
-	if (INTEL_GEN(dev_priv) <= 7)
+	if (GT_GEN_RANGE(dev_priv, 0, 7))
 		rps->pm_intrmsk_mbz |= GEN6_PM_RP_UP_EI_EXPIRED;
 
-	if (INTEL_GEN(dev_priv) >= 8)
+	if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER))
 		rps->pm_intrmsk_mbz |= GEN8_PMINTR_DISABLE_REDIRECT_TO_GUC;
 
 	if (GT_GEN(dev_priv, 2)) {
 		/* Gen2 doesn't have a hardware frame counter */
 		dev->max_vblank_count = 0;
-	} else if (IS_G4X(dev_priv) || INTEL_GEN(dev_priv) >= 5) {
+	} else if (IS_G4X(dev_priv) || GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER)) {
 		dev->max_vblank_count = 0xffffffff; /* full 32 bit counter */
 		dev->driver->get_vblank_counter = g4x_get_vblank_counter;
 	} else {
@@ -4883,7 +4883,7 @@ void intel_irq_init(struct drm_i915_private *dev_priv)
 		dev->driver->enable_vblank = i965_enable_vblank;
 		dev->driver->disable_vblank = i965_disable_vblank;
 		dev_priv->display.hpd_irq_setup = i915_hpd_irq_setup;
-	} else if (INTEL_GEN(dev_priv) >= 11) {
+	} else if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) {
 		dev->driver->irq_handler = gen11_irq_handler;
 		dev->driver->irq_preinstall = gen11_irq_reset;
 		dev->driver->irq_postinstall = gen11_irq_postinstall;
@@ -4891,7 +4891,7 @@ void intel_irq_init(struct drm_i915_private *dev_priv)
 		dev->driver->enable_vblank = gen8_enable_vblank;
 		dev->driver->disable_vblank = gen8_disable_vblank;
 		dev_priv->display.hpd_irq_setup = gen11_hpd_irq_setup;
-	} else if (INTEL_GEN(dev_priv) >= 8) {
+	} else if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER)) {
 		dev->driver->irq_handler = gen8_irq_handler;
 		dev->driver->irq_preinstall = gen8_irq_reset;
 		dev->driver->irq_postinstall = gen8_irq_postinstall;
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index afbccc144c66..5cc1ffa621a7 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -3245,7 +3245,7 @@ int i915_perf_add_config_ioctl(struct drm_device *dev, void *data,
 		goto reg_err;
 	}
 
-	if (INTEL_GEN(dev_priv) < 8) {
+	if (GT_GEN_RANGE(dev_priv, 0, 7)) {
 		if (args->n_flex_regs != 0) {
 			err = -EINVAL;
 			goto reg_err;
diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
index d6c8f8fdfda5..93dace5fce3a 100644
--- a/drivers/gpu/drm/i915/i915_pmu.c
+++ b/drivers/gpu/drm/i915/i915_pmu.c
@@ -326,7 +326,7 @@ engine_event_status(struct intel_engine_cs *engine,
 	case I915_SAMPLE_WAIT:
 		break;
 	case I915_SAMPLE_SEMA:
-		if (INTEL_GEN(engine->i915) < 6)
+		if (GT_GEN_RANGE(engine->i915, 0, 5))
 			return -ENODEV;
 		break;
 	default:
@@ -346,7 +346,7 @@ config_status(struct drm_i915_private *i915, u64 config)
 			return -ENODEV;
 		/* Fall-through. */
 	case I915_PMU_REQUESTED_FREQUENCY:
-		if (INTEL_GEN(i915) < 6)
+		if (GT_GEN_RANGE(i915, 0, 5))
 			return -ENODEV;
 		break;
 	case I915_PMU_INTERRUPTS:
@@ -1036,7 +1036,7 @@ void i915_pmu_register(struct drm_i915_private *i915)
 {
 	int ret;
 
-	if (INTEL_GEN(i915) <= 2) {
+	if (GT_GEN_RANGE(i915, 0, 2)) {
 		DRM_INFO("PMU not supported for this GPU.");
 		return;
 	}
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 33febd5a9eac..0c806230ca35 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -3813,7 +3813,7 @@ enum i915_power_well_id {
 #define INTERVAL_1_28_US(us) roundup(((us) * 100) >> 7, 25)
 #define INTERVAL_1_33_US(us) (((us) * 3) >> 2)
 #define INTERVAL_0_833_US(us) (((us) * 6) / 5)
-#define GT_INTERVAL_FROM_US(dev_priv, us) (INTEL_GEN(dev_priv) >= 9 ? \
+#define GT_INTERVAL_FROM_US(dev_priv, us) (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER) ? \
 				(GT_GEN9_LP(dev_priv) ? \
 				INTERVAL_0_833_US(us) : \
 				INTERVAL_1_33_US(us)) : \
@@ -3822,7 +3822,7 @@ enum i915_power_well_id {
 #define INTERVAL_1_28_TO_US(interval) (((interval) << 7) / 100)
 #define INTERVAL_1_33_TO_US(interval) (((interval) << 2) / 3)
 #define INTERVAL_0_833_TO_US(interval) (((interval) * 5) / 6)
-#define GT_PM_INTERVAL_TO_US(dev_priv, interval) (INTEL_GEN(dev_priv) >= 9 ? \
+#define GT_PM_INTERVAL_TO_US(dev_priv, interval) (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER) ? \
 				(GT_GEN9_LP(dev_priv) ? \
 				INTERVAL_0_833_TO_US(interval) : \
 				INTERVAL_1_33_TO_US(interval)) : \
diff --git a/drivers/gpu/drm/i915/i915_suspend.c b/drivers/gpu/drm/i915/i915_suspend.c
index b47b822fa6d6..e7a57005ff0e 100644
--- a/drivers/gpu/drm/i915/i915_suspend.c
+++ b/drivers/gpu/drm/i915/i915_suspend.c
@@ -32,25 +32,25 @@ static void i915_save_display(struct drm_i915_private *dev_priv)
 {
 	/* Display arbitration control */
-	if (INTEL_GEN(dev_priv) <= 4)
+	if (GT_GEN_RANGE(dev_priv, 0, 4))
 		dev_priv->regfile.saveDSPARB = I915_READ(DSPARB);
 
 	/* save FBC interval */
-	if (HAS_FBC(dev_priv) && INTEL_GEN(dev_priv) <= 4 && !IS_G4X(dev_priv))
+	if (HAS_FBC(dev_priv) && GT_GEN_RANGE(dev_priv, 0, 4) && !IS_G4X(dev_priv))
 		dev_priv->regfile.saveFBC_CONTROL = I915_READ(FBC_CONTROL);
 }
 
 static void i915_restore_display(struct drm_i915_private *dev_priv)
 {
 	/* Display arbitration */
-	if (INTEL_GEN(dev_priv) <= 4)
+	if (GT_GEN_RANGE(dev_priv, 0, 4))
 		I915_WRITE(DSPARB, dev_priv->regfile.saveDSPARB);
 
 	/* only restore FBC info on the platform that supports FBC*/
 	intel_fbc_global_disable(dev_priv);
 
 	/* restore FBC interval */
-	if (HAS_FBC(dev_priv) && INTEL_GEN(dev_priv) <= 4 && !IS_G4X(dev_priv))
+	if (HAS_FBC(dev_priv) && GT_GEN_RANGE(dev_priv, 0, 4) && !IS_G4X(dev_priv))
 		I915_WRITE(FBC_CONTROL, dev_priv->regfile.saveFBC_CONTROL);
 
 	i915_redisable_vga(dev_priv);
@@ -70,7 +70,7 @@ int i915_save_state(struct drm_i915_private *dev_priv)
 			   &dev_priv->regfile.saveGCDGMBUS);
 
 	/* Cache mode state */
-	if (INTEL_GEN(dev_priv) < 7)
+	if (GT_GEN_RANGE(dev_priv, 0, 6))
 		dev_priv->regfile.saveCACHE_MODE_0 = I915_READ(CACHE_MODE_0);
 
 	/* Memory Arbitration state */
@@ -114,7 +114,7 @@ int i915_restore_state(struct drm_i915_private *dev_priv)
 	i915_restore_display(dev_priv);
 
 	/* Cache mode state */
-	if (INTEL_GEN(dev_priv) < 7)
+	if (GT_GEN_RANGE(dev_priv, 0, 6))
 		I915_WRITE(CACHE_MODE_0, dev_priv->regfile.saveCACHE_MODE_0 |
 			   0xffff0000);
 
diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
index e5e6f6bb2b05..c8786a5654e4 100644
--- a/drivers/gpu/drm/i915/i915_sysfs.c
+++ b/drivers/gpu/drm/i915/i915_sysfs.c
@@ -617,7 +617,7 @@ void i915_setup_sysfs(struct drm_i915_private *dev_priv)
 	ret = 0;
 	if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
 		ret = sysfs_create_files(&kdev->kobj, vlv_attrs);
-	else if (INTEL_GEN(dev_priv) >= 6)
+	else if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER))
 		ret = sysfs_create_files(&kdev->kobj, gen6_attrs);
 	if (ret)
 		DRM_ERROR("RPS sysfs setup failed\n");
diff --git a/drivers/gpu/drm/i915/intel_atomic.c b/drivers/gpu/drm/i915/intel_atomic.c
index 74dc23f67151..acd8804b0368 100644
--- a/drivers/gpu/drm/i915/intel_atomic.c
+++ b/drivers/gpu/drm/i915/intel_atomic.c
@@ -248,7 +248,7 @@ static void intel_atomic_setup_scaler(struct intel_crtc_scaler_state *scaler_sta
 			if (plane_state->linked_plane)
 				mode |= PS_PLANE_Y_SEL(plane_state->linked_plane->id);
 		}
-	} else if (INTEL_GEN(dev_priv) > 9 || IS_GEMINILAKE(dev_priv)) {
+	} else if (GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER) || IS_GEMINILAKE(dev_priv)) {
 		mode = PS_SCALER_MODE_NORMAL;
 	} else if (num_scalers_need == 1 && intel_crtc->num_scalers > 1) {
 		/*
diff --git a/drivers/gpu/drm/i915/intel_audio.c b/drivers/gpu/drm/i915/intel_audio.c
index 7f47bacbef20..c2e8a53710b8 100644
--- a/drivers/gpu/drm/i915/intel_audio.c
+++ b/drivers/gpu/drm/i915/intel_audio.c
@@ -733,7 +733,7 @@ void intel_init_audio_hooks(struct drm_i915_private *dev_priv)
 	} else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) {
 		dev_priv->display.audio_codec_enable = ilk_audio_codec_enable;
 		dev_priv->display.audio_codec_disable = ilk_audio_codec_disable;
-	} else if (IS_HASWELL(dev_priv) || INTEL_GEN(dev_priv) >= 8) {
+	} else if (IS_HASWELL(dev_priv) || GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER)) {
 		dev_priv->display.audio_codec_enable = hsw_audio_codec_enable;
 		dev_priv->display.audio_codec_disable = hsw_audio_codec_disable;
 	} else if (HAS_PCH_SPLIT(dev_priv)) {
diff --git a/drivers/gpu/drm/i915/intel_bios.c b/drivers/gpu/drm/i915/intel_bios.c
index 8fa3c79c5f4a..69e41089d7de 100644
--- a/drivers/gpu/drm/i915/intel_bios.c
+++ b/drivers/gpu/drm/i915/intel_bios.c
@@ -516,7 +516,7 @@ parse_driver_features(struct drm_i915_private *dev_priv,
 	if (!driver)
 		return;
 
-	if (INTEL_GEN(dev_priv) >= 5) {
+	if (GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER)) {
 		/*
 		 * Note that we consider BDB_DRIVER_FEATURE_INT_SDVO_LVDS
 		 * to mean "eDP". The VBT spec doesn't agree with that
@@ -712,7 +712,7 @@ parse_psr(struct drm_i915_private *dev_priv, const struct bdb_header *bdb)
 	 */
 	if (bdb->version >= 205 &&
 	    (GT_GEN9_BC(dev_priv) || IS_GEMINILAKE(dev_priv) ||
-	     INTEL_GEN(dev_priv) >= 10)) {
+	     GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER))) {
 		switch (psr_table->tp1_wakeup_time) {
 		case 0:
 			dev_priv->vbt.psr.tp1_wakeup_time_us = 500;
diff --git a/drivers/gpu/drm/i915/intel_cdclk.c b/drivers/gpu/drm/i915/intel_cdclk.c
index 37835d547d68..87af19d11d1e 100644
--- a/drivers/gpu/drm/i915/intel_cdclk.c
+++ b/drivers/gpu/drm/i915/intel_cdclk.c
@@ -2138,7 +2138,7 @@ void intel_set_cdclk(struct drm_i915_private *dev_priv,
 static int intel_pixel_rate_to_cdclk(struct drm_i915_private *dev_priv,
 				     int pixel_rate)
 {
-	if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv))
+	if (GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER) || IS_GEMINILAKE(dev_priv))
 		return DIV_ROUND_UP(pixel_rate, 2);
 	else if (GT_GEN(dev_priv, 9) ||
 		 IS_BROADWELL(dev_priv) || IS_HASWELL(dev_priv))
@@ -2197,7 +2197,7 @@ int intel_crtc_compute_min_cdclk(const struct intel_crtc_state *crtc_state)
 	 * at probe time. If we probe without displays, we'll still end up using
 	 * the platform minimum CDCLK, failing audio probe.
 	 */
-	if (INTEL_GEN(dev_priv) >= 9)
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
 		min_cdclk = max(2 * 96000, min_cdclk);
 
 	/*
@@ -2535,14 +2535,14 @@ static int intel_compute_max_dotclk(struct drm_i915_private *dev_priv)
 {
 	int max_cdclk_freq = dev_priv->max_cdclk_freq;
 
-	if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv))
+	if (GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER) || IS_GEMINILAKE(dev_priv))
 		return 2 * max_cdclk_freq;
 	else if (GT_GEN(dev_priv, 9) ||
 		 IS_BROADWELL(dev_priv) || IS_HASWELL(dev_priv))
 		return max_cdclk_freq;
 	else if (IS_CHERRYVIEW(dev_priv))
 		return max_cdclk_freq*95/100;
-	else if (INTEL_GEN(dev_priv) < 4)
+	else if (GT_GEN_RANGE(dev_priv, 0, 3))
 		return 2*max_cdclk_freq*90/100;
 	else
 		return max_cdclk_freq*90/100;
diff --git a/drivers/gpu/drm/i915/intel_color.c b/drivers/gpu/drm/i915/intel_color.c
index 91a46e4f3453..19a96b6dc9b5 100644
--- a/drivers/gpu/drm/i915/intel_color.c
+++ b/drivers/gpu/drm/i915/intel_color.c
@@ -146,7 +146,7 @@ static void ilk_load_csc_matrix(struct drm_crtc_state *crtc_state)
 	 * FIXME if there's a gamma LUT after the CSC, we should
 	 * do the range compression using the gamma LUT instead.
 	 */
-	if (INTEL_GEN(dev_priv) >= 8 || IS_HASWELL(dev_priv))
+	if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER) || IS_HASWELL(dev_priv))
 		limited_color_range = intel_crtc_state->limited_color_range;
 
 	if (intel_crtc_state->output_format == INTEL_OUTPUT_FORMAT_YCBCR420 ||
@@ -229,7 +229,7 @@ static void ilk_load_csc_matrix(struct drm_crtc_state *crtc_state)
 		I915_WRITE(PIPE_CSC_PREOFF_ME(pipe), 0);
 		I915_WRITE(PIPE_CSC_PREOFF_LO(pipe), 0);
 
-		if (INTEL_GEN(dev_priv) > 6) {
+		if (GT_GEN_RANGE(dev_priv, 7, GEN_FOREVER)) {
 			uint16_t postoff = 0;
 
 			if (limited_color_range)
diff --git a/drivers/gpu/drm/i915/intel_crt.c b/drivers/gpu/drm/i915/intel_crt.c
index b37224227420..dbc97c29f7e5 100644
--- a/drivers/gpu/drm/i915/intel_crt.c
+++ b/drivers/gpu/drm/i915/intel_crt.c
@@ -156,7 +156,7 @@ static void intel_crt_set_dpms(struct intel_encoder *encoder,
 	const struct drm_display_mode *adjusted_mode = &crtc_state->base.adjusted_mode;
 	u32 adpa;
 
-	if (INTEL_GEN(dev_priv) >= 5)
+	if (GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER))
 		adpa = ADPA_HOTPLUG_BITS;
 	else
 		adpa = 0;
@@ -833,7 +833,7 @@ intel_crt_detect(struct drm_connector *connector,
 	if (ret > 0) {
 		if (intel_crt_detect_ddc(connector))
 			status = connector_status_connected;
-		else if (INTEL_GEN(dev_priv) < 4)
+		else if (GT_GEN_RANGE(dev_priv, 0, 3))
 			status = intel_crt_load_detect(crt,
 				to_intel_crtc(connector->state->crtc)->pipe);
 		else if (i915_modparams.load_detect_test)
@@ -883,7 +883,7 @@ void intel_crt_reset(struct drm_encoder *encoder)
 	struct drm_i915_private *dev_priv = to_i915(encoder->dev);
 	struct intel_crt *crt = intel_encoder_to_crt(to_intel_encoder(encoder));
 
-	if (INTEL_GEN(dev_priv) >= 5) {
+	if (GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER)) {
 		u32 adpa;
 
 		adpa = I915_READ(crt->adpa_reg);
diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
index c8390eebe8e3..2664b782054d 100644
--- a/drivers/gpu/drm/i915/intel_ddi.c
+++ b/drivers/gpu/drm/i915/intel_ddi.c
@@ -1369,7 +1369,7 @@ static int cnl_calc_wrpll_link(struct drm_i915_private *dev_priv,
 	uint32_t cfgcr0, cfgcr1;
 	uint32_t p0, p1, p2, dco_freq, ref_clock;
 
-	if (INTEL_GEN(dev_priv) >= 11) {
+	if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) {
 		cfgcr0 = I915_READ(ICL_DPLL_CFGCR0(pll_id));
 		cfgcr1 = I915_READ(ICL_DPLL_CFGCR1(pll_id));
 	} else {
@@ -1745,7 +1745,7 @@ static void intel_ddi_clock_get(struct intel_encoder *encoder,
 		bxt_ddi_clock_get(encoder, pipe_config);
 	else if (GT_GEN9_BC(dev_priv))
 		skl_ddi_clock_get(encoder, pipe_config);
-	else if (INTEL_GEN(dev_priv) <= 8)
+	else if (GT_GEN_RANGE(dev_priv, 0, 8))
 		hsw_ddi_clock_get(encoder, pipe_config);
 }
 
@@ -2888,7 +2888,7 @@ static void intel_ddi_clk_select(struct intel_encoder *encoder,
 
 		I915_WRITE(DPLL_CTRL2, val);
 
-	} else if (INTEL_GEN(dev_priv) < 9) {
+	} else if (GT_GEN_RANGE(dev_priv, 0, 8)) {
 		I915_WRITE(PORT_CLK_SEL(port), hsw_pll_to_ddi_pll_sel(pll));
 	}
 
@@ -2909,7 +2909,7 @@ static void intel_ddi_clk_disable(struct intel_encoder *encoder)
 	} else if (GT_GEN9_BC(dev_priv)) {
 		I915_WRITE(DPLL_CTRL2, I915_READ(DPLL_CTRL2) |
 			   DPLL_CTRL2_DDI_CLK_OFF(port));
-	} else if (INTEL_GEN(dev_priv) < 9) {
+	} else if (GT_GEN_RANGE(dev_priv, 0, 8)) {
 		I915_WRITE(PORT_CLK_SEL(port), PORT_CLK_SEL_NONE);
 	}
 }
@@ -3084,7 +3084,7 @@ static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
 	if (!is_mst)
 		intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
 	intel_dp_start_link_train(intel_dp);
-	if (port != PORT_A || INTEL_GEN(dev_priv) >= 9)
+	if (port != PORT_A || GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
 		intel_dp_stop_link_train(intel_dp);
 
 	icl_enable_phy_clock_gating(dig_port);
@@ -3318,7 +3318,7 @@ static void intel_enable_ddi_dp(struct intel_encoder *encoder,
 	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
 	enum port port = encoder->port;
 
-	if (port == PORT_A && INTEL_GEN(dev_priv) < 9)
+	if (port == PORT_A && GT_GEN_RANGE(dev_priv, 0, 8))
 		intel_dp_stop_link_train(intel_dp);
 
 	intel_edp_backlight_on(crtc_state, conn_state);
@@ -3974,7 +3974,7 @@ intel_ddi_max_lanes(struct intel_digital_port *intel_dport)
 	enum port port = intel_dport->base.port;
 	int max_lanes = 4;
 
-	if (INTEL_GEN(dev_priv) >= 11)
+	if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER))
 		return max_lanes;
 
 	if (port == PORT_A || port == PORT_E) {
@@ -4060,7 +4060,7 @@ void intel_ddi_init(struct drm_i915_private *dev_priv, enum port port)
 	for_each_pipe(dev_priv, pipe)
 		intel_encoder->crtc_mask |= BIT(pipe);
 
-	if (INTEL_GEN(dev_priv) >= 11)
+	if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER))
 		intel_dig_port->saved_port_bits = I915_READ(DDI_BUF_CTL(port)) &
 						  DDI_BUF_PORT_REVERSAL;
 	else
diff --git a/drivers/gpu/drm/i915/intel_device_info.c b/drivers/gpu/drm/i915/intel_device_info.c
index ab967781f495..bad3eb2428ac 100644
--- a/drivers/gpu/drm/i915/intel_device_info.c
+++ b/drivers/gpu/drm/i915/intel_device_info.c
@@ -648,7 +648,7 @@ static u32 read_timestamp_frequency(struct drm_i915_private *dev_priv)
 	u32 f19_2_mhz = 19200;
 	u32 f24_mhz = 24000;
 
-	if (INTEL_GEN(dev_priv) <= 4) {
+	if (GT_GEN_RANGE(dev_priv, 0, 4)) {
 		/* PRMs say:
 		 *
 		 *     "The value in this register increments once every 16
@@ -656,7 +656,7 @@ static u32 read_timestamp_frequency(struct drm_i915_private *dev_priv)
 		 *      (“CLKCFG”) MCHBAR register)
 		 */
 		return dev_priv->rawclk_freq / 16;
-	} else if (INTEL_GEN(dev_priv) <= 8) {
+	} else if (GT_GEN_RANGE(dev_priv, 0, 8)) {
 		/* PRMs say:
 		 *
 		 *     "The PCU TSC counts 10ns increments; this timestamp
@@ -664,7 +664,7 @@ static u32 read_timestamp_frequency(struct drm_i915_private *dev_priv)
 		 *      rolling over every 1.5 hours).
 		 */
 		return f12_5_mhz;
-	} else if (INTEL_GEN(dev_priv) <= 9) {
+	} else if (GT_GEN_RANGE(dev_priv, 0, 9)) {
 		u32 ctc_reg = I915_READ(CTC_MODE);
 		u32 freq = 0;
 
@@ -682,7 +682,7 @@ static u32 read_timestamp_frequency(struct drm_i915_private *dev_priv)
 		}
 
 		return freq;
-	} else if (INTEL_GEN(dev_priv) <= 11) {
+	} else if (GT_GEN_RANGE(dev_priv, 0, 11)) {
 		u32 ctc_reg = I915_READ(CTC_MODE);
 		u32 freq = 0;
 
@@ -696,7 +696,7 @@ static u32 read_timestamp_frequency(struct drm_i915_private *dev_priv)
 		} else {
 			u32 rpm_config_reg = I915_READ(RPM_CONFIG0);
 
-			if (INTEL_GEN(dev_priv) <= 10)
+			if (GT_GEN_RANGE(dev_priv, 0, 10))
 				freq = gen10_get_crystal_clock_freq(dev_priv,
 								rpm_config_reg);
 			else
@@ -741,7 +741,7 @@ void intel_device_info_runtime_init(struct intel_device_info *info)
 		container_of(info, struct drm_i915_private, info);
 	enum pipe pipe;
 
-	if (INTEL_GEN(dev_priv) >= 10) {
+	if (GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER)) {
 		for_each_pipe(dev_priv, pipe)
 			info->num_scalers[pipe] = 2;
 	} else if (GT_GEN(dev_priv, 9)) {
@@ -774,7 +774,7 @@ void intel_device_info_runtime_init(struct intel_device_info *info)
 	} else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) {
 		for_each_pipe(dev_priv, pipe)
 			info->num_sprites[pipe] = 2;
-	} else if (INTEL_GEN(dev_priv) >= 5 || IS_G4X(dev_priv)) {
+	} else if (GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER) || IS_G4X(dev_priv)) {
 		for_each_pipe(dev_priv, pipe)
 			info->num_sprites[pipe] = 1;
 	}
@@ -851,7 +851,7 @@ void intel_device_info_runtime_init(struct intel_device_info *info)
 		gen9_sseu_info_init(dev_priv);
 	else if (GT_GEN(dev_priv, 10))
 		gen10_sseu_info_init(dev_priv);
-	else if (INTEL_GEN(dev_priv) >= 11)
+	else if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER))
 		gen11_sseu_info_init(dev_priv);
 
 	if (GT_GEN(dev_priv, 6) && intel_vtd_active()) {
@@ -883,7 +883,7 @@ void intel_device_info_init_mmio(struct drm_i915_private *dev_priv)
 	u32 media_fuse;
 	unsigned int i;
 
-	if (INTEL_GEN(dev_priv) < 11)
+	if (GT_GEN_RANGE(dev_priv, 0, 10))
 		return;
 
 	media_fuse = ~I915_READ(GEN11_GT_VEBOX_VDBOX_DISABLE);
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index b13956966a58..454a7e86adc8 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -1023,7 +1023,7 @@ intel_wait_for_pipe_off(const struct intel_crtc_state *old_crtc_state)
 	struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->base.crtc);
 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
 
-	if (INTEL_GEN(dev_priv) >= 4) {
+	if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER)) {
 		enum transcoder cpu_transcoder = old_crtc_state->cpu_transcoder;
 		i915_reg_t reg = PIPECONF(cpu_transcoder);
 
@@ -1480,7 +1480,7 @@ static void i9xx_enable_pll(struct intel_crtc *crtc,
 	POSTING_READ(reg);
 	udelay(150);
 
-	if (INTEL_GEN(dev_priv) >= 4) {
+	if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER)) {
 		I915_WRITE(DPLL_MD(crtc->pipe),
 			   crtc_state->dpll_hw_state.dpll_md);
 	} else {
@@ -1968,12 +1968,12 @@ static unsigned int intel_cursor_alignment(const struct drm_i915_private *dev_pr
 
 static unsigned int
 intel_linear_alignment(const struct drm_i915_private *dev_priv)
 {
-	if (INTEL_GEN(dev_priv) >= 9)
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
 		return 256 * 1024;
 	else if (IS_I965G(dev_priv) || IS_I965GM(dev_priv) ||
 		 IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
 		return 128 * 1024;
-	else if (INTEL_GEN(dev_priv) >= 4)
+	else if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER))
 		return 4 * 1024;
 	else
 		return 0;
@@ -1992,7 +1992,7 @@ static unsigned int intel_surf_alignment(const struct drm_framebuffer *fb,
 	case DRM_FORMAT_MOD_LINEAR:
 		return intel_linear_alignment(dev_priv);
 	case I915_FORMAT_MOD_X_TILED:
-		if (INTEL_GEN(dev_priv) >= 9)
+		if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
 			return 256 * 1024;
 		return 0;
 	case I915_FORMAT_MOD_Y_TILED_CCS:
@@ -2011,7 +2011,7 @@ static bool intel_plane_uses_fence(const struct intel_plane_state *plane_state)
 	struct intel_plane *plane = to_intel_plane(plane_state->base.plane);
 	struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
 
-	return INTEL_GEN(dev_priv) < 4 || plane->has_fbc;
+	return GT_GEN_RANGE(dev_priv, 0, 3) || plane->has_fbc;
 }
 
 struct i915_vma *
@@ -2087,7 +2087,7 @@ intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb,
 	 * mode that matches the user configuration.
 	 */
 	ret = i915_vma_pin_fence(vma);
-	if (ret != 0 && INTEL_GEN(dev_priv) < 4) {
+	if (ret != 0 && GT_GEN_RANGE(dev_priv, 0, 3)) {
 		i915_gem_object_unpin_from_display_plane(vma);
 		vma = ERR_PTR(ret);
 		goto err;
@@ -3145,12 +3145,12 @@ i9xx_plane_max_stride(struct intel_plane *plane,
 	if (!HAS_GMCH_DISPLAY(dev_priv)) {
 		return 32*1024;
-	} else if (INTEL_GEN(dev_priv) >= 4) {
+	} else if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER)) {
 		if (modifier == I915_FORMAT_MOD_X_TILED)
 			return 16*1024;
 		else
 			return 32*1024;
-	} else if (INTEL_GEN(dev_priv) >= 3) {
+	} else if (GT_GEN_RANGE(dev_priv, 3, GEN_FOREVER)) {
 		if (modifier == I915_FORMAT_MOD_X_TILED)
 			return 8*1024;
 		else
@@ -3182,7 +3182,7 @@ static u32 i9xx_plane_ctl(const struct intel_crtc_state *crtc_state,
 	if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv))
 		dspcntr |= DISPPLANE_PIPE_CSC_ENABLE;
 
-	if (INTEL_GEN(dev_priv) < 5)
+	if (GT_GEN_RANGE(dev_priv, 0, 4))
 		dspcntr |= DISPPLANE_SEL_PIPE(crtc->pipe);
 
 	switch (fb->format->format) {
@@ -3212,7 +3212,7 @@ static u32 i9xx_plane_ctl(const struct intel_crtc_state *crtc_state,
 		return 0;
 	}
 
-	if (INTEL_GEN(dev_priv) >= 4 &&
+	if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER) &&
 	    fb->modifier == I915_FORMAT_MOD_X_TILED)
 		dspcntr |= DISPPLANE_TILED;
 
@@ -3245,7 +3245,7 @@ int i9xx_check_plane_surface(struct intel_plane_state *plane_state)
 
 	intel_add_fb_offsets(&src_x, &src_y, plane_state, 0);
 
-	if (INTEL_GEN(dev_priv) >= 4)
+	if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER))
 		offset = intel_plane_compute_aligned_offset(&src_x, &src_y,
 							    plane_state, 0);
 	else
@@ -3321,14 +3321,14 @@ static void i9xx_update_plane(struct intel_plane *plane,
 
 	linear_offset = intel_fb_xy_to_linear(x, y, plane_state, 0);
 
-	if (INTEL_GEN(dev_priv) >= 4)
+	if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER))
 		dspaddr_offset = plane_state->color_plane[0].offset;
 	else
 		dspaddr_offset = linear_offset;
 
 	spin_lock_irqsave(&dev_priv->uncore.lock, irqflags);
 
-	if (INTEL_GEN(dev_priv) < 4) {
+	if (GT_GEN_RANGE(dev_priv, 0, 3)) {
 		/* pipesrc and dspsize control the size that is scaled from,
 		 * which should always be the user's requested size.
 		 */
@@ -3352,7 +3352,7 @@ static void i9xx_update_plane(struct intel_plane *plane,
 			      intel_plane_ggtt_offset(plane_state) +
 			      dspaddr_offset);
 		I915_WRITE_FW(DSPOFFSET(i9xx_plane), (y << 16) | x);
-	} else if (INTEL_GEN(dev_priv) >= 4) {
+	} else if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER)) {
 		I915_WRITE_FW(DSPSURF(i9xx_plane),
 			      intel_plane_ggtt_offset(plane_state) +
 			      dspaddr_offset);
@@ -3378,7 +3378,7 @@ static void i9xx_disable_plane(struct intel_plane *plane,
 	spin_lock_irqsave(&dev_priv->uncore.lock, irqflags);
 
 	I915_WRITE_FW(DSPCNTR(i9xx_plane), 0);
-	if (INTEL_GEN(dev_priv) >= 4)
+	if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER))
 		I915_WRITE_FW(DSPSURF(i9xx_plane), 0);
 	else
 		I915_WRITE_FW(DSPADDR(i9xx_plane), 0);
@@ -3409,7 +3409,7 @@ static bool i9xx_plane_get_hw_state(struct intel_plane *plane,
 
 	ret = val & DISPLAY_PLANE_ENABLE;
 
-	if (INTEL_GEN(dev_priv) >= 5)
+	if (GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER))
 		*pipe = plane->pipe;
 	else
 		*pipe = (val & DISPPLANE_SEL_PIPE_MASK) >>
@@ -3619,7 +3619,7 @@ u32 skl_plane_ctl(const struct intel_crtc_state *crtc_state,
 
 	plane_ctl = PLANE_CTL_ENABLE;
 
-	if (INTEL_GEN(dev_priv) < 10 && !IS_GEMINILAKE(dev_priv)) {
+	if (GT_GEN_RANGE(dev_priv, 0, 9) && !IS_GEMINILAKE(dev_priv)) {
 		plane_ctl |= skl_plane_ctl_alpha(plane_state);
 		plane_ctl |= PLANE_CTL_PIPE_GAMMA_ENABLE |
@@ -3637,7 +3637,7 @@ u32 skl_plane_ctl(const struct intel_crtc_state *crtc_state,
 	plane_ctl |= skl_plane_ctl_tiling(fb->modifier);
 	plane_ctl |= skl_plane_ctl_rotate(rotation & DRM_MODE_ROTATE_MASK);
 
-	if (INTEL_GEN(dev_priv) >= 10)
+	if (GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER))
 		plane_ctl |= cnl_plane_ctl_flip(rotation &
 						DRM_MODE_REFLECT_MASK);
@@ -3658,7 +3658,7 @@ u32 glk_plane_color_ctl(const struct intel_crtc_state *crtc_state,
 	struct intel_plane *plane = to_intel_plane(plane_state->base.plane);
 	u32 plane_color_ctl = 0;
 
-	if (INTEL_GEN(dev_priv) < 11) {
+	if (GT_GEN_RANGE(dev_priv, 0, 10)) {
 		plane_color_ctl |= PLANE_COLOR_PIPE_GAMMA_ENABLE;
 		plane_color_ctl |= PLANE_COLOR_PIPE_CSC_ENABLE;
 	}
@@ -3722,7 +3722,7 @@ __intel_display_resume(struct drm_device *dev,
 static bool gpu_reset_clobbers_display(struct drm_i915_private *dev_priv)
 {
 	return intel_has_gpu_reset(dev_priv) &&
-		INTEL_GEN(dev_priv) < 5 && !IS_G4X(dev_priv);
+		GT_GEN_RANGE(dev_priv, 0, 4) && !IS_G4X(dev_priv);
 }
 
 void intel_prepare_reset(struct drm_i915_private *dev_priv)
@@ -3858,7 +3858,7 @@ static void intel_update_pipe_config(const struct intel_crtc_state *old_crtc_sta
 		      (new_crtc_state->pipe_src_h - 1));
 
 	/* on skylake this is done by detaching scalers */
-	if (INTEL_GEN(dev_priv) >= 9) {
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) {
 		skl_detach_scalers(new_crtc_state);
 
 		if (new_crtc_state->pch_pfit.enabled)
@@ -4823,7 +4823,7 @@ skl_update_scaler(struct intel_crtc_state *crtc_state, bool force_detach,
 	 * Once NV12 is enabled, handle it here while allocating scaler
 	 * for NV12.
 	 */
-	if (INTEL_GEN(dev_priv) >= 9 && crtc_state->base.enable &&
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER) && crtc_state->base.enable &&
 	    need_scaler && adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE) {
 		DRM_DEBUG_KMS("Pipe/Plane scaling not supported with IF-ID mode\n");
 		return -EINVAL;
@@ -5653,7 +5653,7 @@ static void haswell_crtc_enable(struct intel_crtc_state *pipe_config,
 	if (pipe_config->shared_dpll)
 		intel_enable_shared_dpll(pipe_config);
 
-	if (INTEL_GEN(dev_priv) >= 11)
+	if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER))
 		icl_map_plls_to_ports(crtc, pipe_config, old_state);
 
 	intel_encoders_pre_enable(crtc, pipe_config, old_state);
@@ -5692,7 +5692,7 @@ static void haswell_crtc_enable(struct intel_crtc_state *pipe_config,
 	if (psl_clkgate_wa)
 		glk_pipe_scaler_clock_gating_wa(dev_priv, pipe, true);
 
-	if (INTEL_GEN(dev_priv) >= 9)
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
 		skylake_pfit_enable(pipe_config);
 	else
 		ironlake_pfit_enable(pipe_config);
@@ -5707,7 +5707,7 @@ static void haswell_crtc_enable(struct intel_crtc_state *pipe_config,
 	 * Display WA #1153: enable hardware to bypass the alpha math
 	 * and rounding for per-pixel values 00 and 0xff
 	 */
-	if (INTEL_GEN(dev_priv) >= 11) {
+	if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) {
 		pipe_chicken = I915_READ(PIPE_CHICKEN(pipe));
 		if (!(pipe_chicken & PER_PIXEL_ALPHA_BYPASS_EN))
 			I915_WRITE_FW(PIPE_CHICKEN(pipe),
@@ -5721,7 +5721,7 @@ static void haswell_crtc_enable(struct intel_crtc_state *pipe_config,
 	if (dev_priv->display.initial_watermarks != NULL)
 		dev_priv->display.initial_watermarks(old_intel_state,
 						     pipe_config);
 
-	if (INTEL_GEN(dev_priv) >= 11)
+	if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER))
 		icl_pipe_mbus_enable(intel_crtc);
 
 	/* XXX: Do the pipe assertions at the right place for BXT DSI.
 	 */
@@ -5850,14 +5850,14 @@ static void haswell_crtc_disable(struct intel_crtc_state *old_crtc_state,
 	if (!transcoder_is_dsi(cpu_transcoder))
 		intel_ddi_disable_transcoder_func(old_crtc_state);
 
-	if (INTEL_GEN(dev_priv) >= 9)
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
 		skylake_scaler_disable(intel_crtc);
 	else
 		ironlake_pfit_disable(old_crtc_state);
 
 	intel_encoders_post_disable(crtc, old_crtc_state, old_state);
 
-	if (INTEL_GEN(dev_priv) >= 11)
+	if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER))
 		icl_unmap_plls_to_ports(crtc, old_crtc_state, old_state);
 
 	intel_encoders_post_pll_disable(crtc, old_crtc_state, old_state);
@@ -6524,7 +6524,7 @@ static bool intel_crtc_supports_double_wide(const struct intel_crtc *crtc)
 	const struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
 
 	/* GDG double wide on either pipe, otherwise pipe A only */
-	return INTEL_GEN(dev_priv) < 4 &&
+	return GT_GEN_RANGE(dev_priv, 0, 3) &&
 		(crtc->pipe == PIPE_A || IS_I915G(dev_priv));
 }
 
@@ -6584,7 +6584,7 @@ static int intel_crtc_compute_config(struct intel_crtc *crtc,
 	const struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
 	int clock_limit = dev_priv->max_dotclk_freq;
 
-	if (INTEL_GEN(dev_priv) < 4) {
+	if (GT_GEN_RANGE(dev_priv, 0, 3)) {
 		clock_limit = dev_priv->max_cdclk_freq * 9 / 10;
 
 		/*
@@ -6639,8 +6639,8 @@ static int intel_crtc_compute_config(struct intel_crtc *crtc,
 	/* Cantiga+ cannot handle modes with a hsync front porch of 0.
 	 * WaPruneModeWithIncorrectHsyncOffset:ctg,elk,ilk,snb,ivb,vlv,hsw.
 	 */
-	if ((INTEL_GEN(dev_priv) > 4 || IS_G4X(dev_priv)) &&
-	    adjusted_mode->crtc_hsync_start == adjusted_mode->crtc_hdisplay)
+	if ((GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER) || IS_G4X(dev_priv)) &&
+	    adjusted_mode->crtc_hsync_start == adjusted_mode->crtc_hdisplay)
 		return -EINVAL;
 
 	intel_crtc_compute_pixel_rate(pipe_config);
@@ -6808,7 +6808,7 @@ static void intel_cpu_transcoder_set_m_n(const struct intel_crtc_state *crtc_sta
 	enum pipe pipe = crtc->pipe;
 	enum transcoder transcoder = crtc_state->cpu_transcoder;
 
-	if (INTEL_GEN(dev_priv) >= 5) {
+	if (GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER)) {
 		I915_WRITE(PIPE_DATA_M1(transcoder), TU_SIZE(m_n->tu) | m_n->gmch_m);
 		I915_WRITE(PIPE_DATA_N1(transcoder), m_n->gmch_n);
 		I915_WRITE(PIPE_LINK_M1(transcoder), m_n->link_m);
@@ -7202,7 +7202,7 @@ static void i9xx_compute_dpll(struct intel_crtc *crtc,
 			dpll |= DPLLB_LVDS_P2_CLOCK_DIV_14;
 			break;
 		}
-		if (INTEL_GEN(dev_priv) >= 4)
+		if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER))
 			dpll |= (6 << PLL_LOAD_PULSE_PHASE_SHIFT);
 
 		if (crtc_state->sdvo_tv_clock)
@@ -7216,7 +7216,7 @@ static void i9xx_compute_dpll(struct intel_crtc *crtc,
 	dpll |= DPLL_VCO_ENABLE;
 	crtc_state->dpll_hw_state.dpll = dpll;
 
-	if (INTEL_GEN(dev_priv) >= 4) {
+	if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER)) {
 		u32 dpll_md = (crtc_state->pixel_multiplier - 1)
 			      << DPLL_MD_UDI_MULTIPLIER_SHIFT;
 		crtc_state->dpll_hw_state.dpll_md = dpll_md;
@@ -7290,7 +7290,7 @@ static void intel_set_pipe_timings(const struct intel_crtc_state *crtc_state)
 			vsyncshift += adjusted_mode->crtc_htotal;
 	}
 
-	if (INTEL_GEN(dev_priv) > 3)
+	if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER))
 		I915_WRITE(VSYNCSHIFT(cpu_transcoder), vsyncshift);
 
 	I915_WRITE(HTOTAL(cpu_transcoder),
@@ -7450,7 +7450,7 @@ static void i9xx_set_pipeconf(const struct intel_crtc_state *crtc_state)
 	}
 
 	if (crtc_state->base.adjusted_mode.flags & DRM_MODE_FLAG_INTERLACE) {
-		if (INTEL_GEN(dev_priv) < 4 ||
+		if (GT_GEN_RANGE(dev_priv, 0, 3) ||
 		    intel_crtc_has_type(crtc_state, INTEL_OUTPUT_SDVO))
 			pipeconf |= PIPECONF_INTERLACE_W_FIELD_INDICATION;
 		else
@@ -7661,7 +7661,7 @@ static void i9xx_get_pfit_config(struct intel_crtc *crtc,
 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
 	uint32_t tmp;
 
-	if (INTEL_GEN(dev_priv) <= 3 &&
+	if (GT_GEN_RANGE(dev_priv, 0, 3) &&
 	    (IS_I830(dev_priv) || !IS_MOBILE(dev_priv)))
 		return;
 
@@ -7670,7 +7670,7 @@ static void i9xx_get_pfit_config(struct intel_crtc *crtc,
 		return;
 
 	/* Check whether the pfit is attached to our pipe. */
-	if (INTEL_GEN(dev_priv) < 4) {
+	if (GT_GEN_RANGE(dev_priv, 0, 3)) {
 		if (crtc->pipe != PIPE_B)
 			return;
 	} else {
@@ -7741,7 +7741,7 @@ i9xx_get_initial_plane_config(struct intel_crtc *crtc,
 
 	val = I915_READ(DSPCNTR(i9xx_plane));
 
-	if (INTEL_GEN(dev_priv) >= 4) {
+	if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER)) {
 		if (val & DISPPLANE_TILED) {
 			plane_config->tiling = I915_TILING_X;
 			fb->modifier = I915_FORMAT_MOD_X_TILED;
@@ -7755,7 +7755,7 @@ i9xx_get_initial_plane_config(struct intel_crtc *crtc,
 	if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) {
 		offset = I915_READ(DSPOFFSET(i9xx_plane));
 		base = I915_READ(DSPSURF(i9xx_plane)) & 0xfffff000;
-	} else if (INTEL_GEN(dev_priv) >= 4) {
+	} else if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER)) {
 		if (plane_config->tiling)
 			offset = I915_READ(DSPTILEOFF(i9xx_plane));
 		else
@@ -7827,7 +7827,7 @@ static void intel_get_crtc_ycbcr_config(struct intel_crtc *crtc,
 
 	pipe_config->lspcon_downsampling = false;
 
-	if (IS_BROADWELL(dev_priv) || INTEL_GEN(dev_priv) >= 9) {
+	if (IS_BROADWELL(dev_priv) || GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) {
 		u32 tmp = I915_READ(PIPEMISC(crtc->pipe));
 
 		if (tmp & PIPEMISC_OUTPUT_COLORSPACE_YUV) {
@@ -7839,7 +7839,7 @@ static void intel_get_crtc_ycbcr_config(struct intel_crtc *crtc,
 			if (!blend)
 				output = INTEL_OUTPUT_FORMAT_INVALID;
 			else if (!(IS_GEMINILAKE(dev_priv) ||
-				   INTEL_GEN(dev_priv) >= 10))
+				   GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER)))
 				output = INTEL_OUTPUT_FORMAT_INVALID;
 			else
 				output = INTEL_OUTPUT_FORMAT_YCBCR420;
@@ -7905,7 +7905,7 @@ static bool i9xx_get_pipe_config(struct intel_crtc *crtc,
 	    (tmp & PIPECONF_COLOR_RANGE_SELECT))
 		pipe_config->limited_color_range = true;
 
-	if (INTEL_GEN(dev_priv) < 4)
+	if (GT_GEN_RANGE(dev_priv, 0, 3))
 		pipe_config->double_wide = tmp & PIPECONF_DOUBLE_WIDE;
 
 	intel_get_pipe_timings(crtc, pipe_config);
@@ -7913,7 +7913,7 @@ static bool i9xx_get_pipe_config(struct intel_crtc *crtc,
 
 	i9xx_get_pfit_config(crtc, pipe_config);
 
-	if (INTEL_GEN(dev_priv) >= 4) {
+	if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER)) {
 		/* No way to read it out on pipes B and C */
 		if (IS_CHERRYVIEW(dev_priv) && crtc->pipe != PIPE_A)
 			tmp = dev_priv->chv_dpll_md[crtc->pipe];
@@ -8472,7 +8472,7 @@ static void haswell_set_pipemisc(const struct intel_crtc_state *crtc_state)
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->base.crtc);
 	struct drm_i915_private *dev_priv = to_i915(intel_crtc->base.dev);
 
-	if (IS_BROADWELL(dev_priv) || INTEL_GEN(dev_priv) >= 9) {
+	if (IS_BROADWELL(dev_priv) || GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) {
 		u32 val = 0;
 
 		switch (crtc_state->pipe_bpp) {
@@ -8705,7 +8705,7 @@ static void intel_cpu_transcoder_get_m_n(struct intel_crtc *crtc,
 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
 	enum pipe pipe = crtc->pipe;
 
-	if (INTEL_GEN(dev_priv) >= 5) {
+	if (GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER)) {
 		m_n->link_m = I915_READ(PIPE_LINK_M1(transcoder));
 		m_n->link_n = I915_READ(PIPE_LINK_N1(transcoder));
 		m_n->gmch_m = I915_READ(PIPE_DATA_M1(transcoder))
@@ -8814,12 +8814,12 @@ skylake_get_initial_plane_config(struct intel_crtc *crtc,
 
 	val = I915_READ(PLANE_CTL(pipe, plane_id));
 
-	if (INTEL_GEN(dev_priv) >= 11)
+	if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER))
 		pixel_format = val & ICL_PLANE_CTL_FORMAT_MASK;
 	else
 		pixel_format = val & PLANE_CTL_FORMAT_MASK;
 
-	if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv)) {
+	if (GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER) || IS_GEMINILAKE(dev_priv)) {
 		alpha = I915_READ(PLANE_COLOR_CTL(pipe, plane_id));
 		alpha &= PLANE_COLOR_ALPHA_MASK;
 	} else {
@@ -9499,7 +9499,7 @@ static void haswell_get_ddi_port_state(struct intel_crtc *crtc,
 	 * DDI E. So just check whether this pipe is wired to DDI E and whether
 	 * the PCH transcoder is on.
	 */
-	if (INTEL_GEN(dev_priv) < 9 &&
+	if (GT_GEN_RANGE(dev_priv, 0, 8) &&
 	    (port == PORT_E) && I915_READ(LPT_TRANSCONF) & TRANS_ENABLE) {
 		pipe_config->has_pch_encoder = true;
 
@@ -9553,7 +9553,7 @@ static bool haswell_get_pipe_config(struct intel_crtc *crtc,
 	power_domain = POWER_DOMAIN_PIPE_PANEL_FITTER(crtc->pipe);
 	if (intel_display_power_get_if_enabled(dev_priv, power_domain)) {
 		power_domain_mask |= BIT_ULL(power_domain);
-		if (INTEL_GEN(dev_priv) >= 9)
+		if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
 			skylake_get_pfit_config(crtc, pipe_config);
 		else
 			ironlake_get_pfit_config(crtc, pipe_config);
@@ -9868,14 +9868,14 @@ static u32 i9xx_cursor_ctl(const struct intel_crtc_state *crtc_state,
 	if (GT_GEN(dev_priv, 6) || IS_IVYBRIDGE(dev_priv))
 		cntl |= MCURSOR_TRICKLE_FEED_DISABLE;
 
-	if (INTEL_GEN(dev_priv) <= 10) {
+	if (GT_GEN_RANGE(dev_priv, 0, 10)) {
 		cntl |= MCURSOR_GAMMA_ENABLE;
 
 		if (HAS_DDI(dev_priv))
 			cntl |= MCURSOR_PIPE_CSC_ENABLE;
 	}
 
-	if (INTEL_GEN(dev_priv) < 5 && !IS_G4X(dev_priv))
+	if (GT_GEN_RANGE(dev_priv, 0, 4) && !IS_G4X(dev_priv))
 		cntl |= MCURSOR_PIPE_SELECT(crtc->pipe);
 
 	switch (plane_state->base.crtc_w) {
@@ -10080,7 +10080,7 @@ static bool i9xx_cursor_get_hw_state(struct intel_plane *plane,
 
 	ret = val & MCURSOR_MODE;
 
-	if (INTEL_GEN(dev_priv) >= 5 || IS_G4X(dev_priv))
+	if (GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER) || IS_G4X(dev_priv))
 		*pipe = plane->pipe;
 	else
 		*pipe = (val & MCURSOR_PIPE_SELECT_MASK) >>
@@ -10580,7 +10580,7 @@ int intel_plane_atomic_calc_changes(const struct intel_crtc_state *old_crtc_stat
 	struct drm_framebuffer *fb = plane_state->fb;
 	int ret;
 
-	if (INTEL_GEN(dev_priv) >= 9 && plane->id != PLANE_CURSOR) {
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER) && plane->id != PLANE_CURSOR) {
 		ret = skl_update_scaler_plane(
 			to_intel_crtc_state(crtc_state),
to_intel_plane_state(plane_state)); @@ -10629,21 +10629,21 @@ int intel_plane_atomic_calc_changes(const struct intel_crtc_state *old_crtc_stat turn_off, turn_on, mode_changed); if (turn_on) { - if (INTEL_GEN(dev_priv) < 5 && !IS_G4X(dev_priv)) + if (GT_GEN_RANGE(dev_priv, 0, 4) && !IS_G4X(dev_priv)) pipe_config->update_wm_pre = true; /* must disable cxsr around plane enable/disable */ if (plane->id != PLANE_CURSOR) pipe_config->disable_cxsr = true; } else if (turn_off) { - if (INTEL_GEN(dev_priv) < 5 && !IS_G4X(dev_priv)) + if (GT_GEN_RANGE(dev_priv, 0, 4) && !IS_G4X(dev_priv)) pipe_config->update_wm_post = true; /* must disable cxsr around plane enable/disable */ if (plane->id != PLANE_CURSOR) pipe_config->disable_cxsr = true; } else if (intel_wm_need_update(&plane->base, plane_state)) { - if (INTEL_GEN(dev_priv) < 5 && !IS_G4X(dev_priv)) { + if (GT_GEN_RANGE(dev_priv, 0, 4) && !IS_G4X(dev_priv)) { /* FIXME bollocks */ pipe_config->update_wm_pre = true; pipe_config->update_wm_post = true; @@ -10755,7 +10755,7 @@ static int icl_check_nv12_planes(struct intel_crtc_state *crtc_state) struct intel_plane_state *plane_state; int i; - if (INTEL_GEN(dev_priv) < 11) + if (GT_GEN_RANGE(dev_priv, 0, 10)) return 0; /* @@ -10878,11 +10878,11 @@ static int intel_crtc_atomic_check(struct drm_crtc *crtc, return ret; } } else if (dev_priv->display.compute_intermediate_wm) { - if (HAS_PCH_SPLIT(dev_priv) && INTEL_GEN(dev_priv) < 9) + if (HAS_PCH_SPLIT(dev_priv) && GT_GEN_RANGE(dev_priv, 0, 8)) pipe_config->wm.ilk.intermediate = pipe_config->wm.ilk.optimal; } - if (INTEL_GEN(dev_priv) >= 9) { + if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) { if (mode_changed) ret = skl_update_scaler_crtc(pipe_config); @@ -10978,7 +10978,7 @@ compute_baseline_pipe_bpp(struct intel_crtc *crtc, if ((IS_G4X(dev_priv) || IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))) bpp = 10*3; - else if (INTEL_GEN(dev_priv) >= 5) + else if (GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER)) bpp = 12*3; else bpp = 8*3; @@ 
-11134,7 +11134,7 @@ static void intel_dump_pipe_config(struct intel_crtc *crtc, pipe_config->pipe_src_w, pipe_config->pipe_src_h, pipe_config->pixel_rate); - if (INTEL_GEN(dev_priv) >= 9) + if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) DRM_DEBUG_KMS("num_scalers: %d, scaler_users: 0x%x, scaler_id: %d\n", crtc->num_scalers, pipe_config->scaler_state.scaler_users, @@ -11175,7 +11175,7 @@ static void intel_dump_pipe_config(struct intel_crtc *crtc, plane->base.id, plane->name, fb->base.id, fb->width, fb->height, drm_get_format_name(fb->format->format, &format_name)); - if (INTEL_GEN(dev_priv) >= 9) + if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) DRM_DEBUG_KMS("\tscaler:%d src %dx%d+%d+%d dst %dx%d+%d+%d\n", state->scaler_id, state->base.src.x1 >> 16, @@ -11664,7 +11664,7 @@ intel_pipe_config_compare(struct drm_i915_private *dev_priv, PIPE_CONF_CHECK_I(lane_count); PIPE_CONF_CHECK_X(lane_lat_optim_mask); - if (INTEL_GEN(dev_priv) < 8) { + if (GT_GEN_RANGE(dev_priv, 0, 7)) { PIPE_CONF_CHECK_M_N(dp_m_n); if (current_config->has_drrs) @@ -11691,7 +11691,7 @@ intel_pipe_config_compare(struct drm_i915_private *dev_priv, PIPE_CONF_CHECK_I(pixel_multiplier); PIPE_CONF_CHECK_I(output_format); PIPE_CONF_CHECK_BOOL(has_hdmi_sink); - if ((INTEL_GEN(dev_priv) < 8 && !IS_HASWELL(dev_priv)) || + if ((GT_GEN_RANGE(dev_priv, 0, 7) && !IS_HASWELL(dev_priv)) || IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) PIPE_CONF_CHECK_BOOL(limited_color_range); @@ -11717,7 +11717,7 @@ intel_pipe_config_compare(struct drm_i915_private *dev_priv, PIPE_CONF_CHECK_X(gmch_pfit.control); /* pfit ratios are autocomputed by the hw on gen4+ */ - if (INTEL_GEN(dev_priv) < 4) + if (GT_GEN_RANGE(dev_priv, 0, 3)) PIPE_CONF_CHECK_X(gmch_pfit.pgm_ratios); PIPE_CONF_CHECK_X(gmch_pfit.lvds_border_bits); @@ -11773,7 +11773,7 @@ intel_pipe_config_compare(struct drm_i915_private *dev_priv, PIPE_CONF_CHECK_X(dsi_pll.ctrl); PIPE_CONF_CHECK_X(dsi_pll.div); - if (IS_G4X(dev_priv) || INTEL_GEN(dev_priv) >= 5) + if 
(IS_G4X(dev_priv) || GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER)) PIPE_CONF_CHECK_I(pipe_bpp); PIPE_CONF_CHECK_CLOCK_FUZZY(base.adjusted_mode.crtc_clock); @@ -11823,7 +11823,7 @@ static void verify_wm_state(struct drm_crtc *crtc, const enum pipe pipe = intel_crtc->pipe; int plane, level, max_level = ilk_wm_max_level(dev_priv); - if (INTEL_GEN(dev_priv) < 9 || !new_state->active) + if (GT_GEN_RANGE(dev_priv, 0, 8) || !new_state->active) return; skl_pipe_wm_get_hw_state(crtc, &hw_wm); @@ -11832,7 +11832,7 @@ static void verify_wm_state(struct drm_crtc *crtc, skl_ddb_get_hw_state(dev_priv, &hw_ddb); sw_ddb = &dev_priv->wm.skl_hw.ddb; - if (INTEL_GEN(dev_priv) >= 11) + if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) if (hw_ddb.enabled_slices != sw_ddb->enabled_slices) DRM_ERROR("mismatch in DBUF Slices (expected %u, got %u)\n", sw_ddb->enabled_slices, @@ -12665,7 +12665,7 @@ static void skl_update_crtcs(struct drm_atomic_state *state) entries[i] = &to_intel_crtc_state(old_crtc_state)->wm.skl.ddb; /* If 2nd DBuf slice required, enable it here */ - if (INTEL_GEN(dev_priv) >= 11 && required_slices > hw_enabled_slices) + if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER) && required_slices > hw_enabled_slices) icl_dbuf_slices_update(dev_priv, required_slices); /* @@ -12720,7 +12720,7 @@ static void skl_update_crtcs(struct drm_atomic_state *state) } while (progress); /* If 2nd DBuf slice is no more required disable it */ - if (INTEL_GEN(dev_priv) >= 11 && required_slices < hw_enabled_slices) + if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER) && required_slices < hw_enabled_slices) icl_dbuf_slices_update(dev_priv, required_slices); } @@ -13034,7 +13034,7 @@ static int intel_atomic_commit(struct drm_device *dev, * FIXME doing watermarks and fb cleanup from a vblank worker * (assuming we had any) would solve these problems. 
*/
- if (INTEL_GEN(dev_priv) < 9 && state->legacy_cursor_update) {
+ if (GT_GEN_RANGE(dev_priv, 0, 8) && state->legacy_cursor_update) {
struct intel_crtc_state *new_crtc_state; struct intel_crtc *crtc; int i;
@@ -13143,7 +13143,7 @@ static void add_rps_boost_after_vblank(struct drm_crtc *crtc,
if (!dma_fence_is_i915(fence)) return;
- if (INTEL_GEN(to_i915(crtc->dev)) < 6)
+ if (GT_GEN_RANGE(to_i915(crtc->dev), 0, 5))
return; if (drm_crtc_vblank_get(crtc))
@@ -13374,7 +13374,7 @@ skl_max_scale(const struct intel_crtc_state *crtc_state,
crtc_clock = crtc_state->base.adjusted_mode.crtc_clock; max_dotclk = to_intel_atomic_state(crtc_state->base.state)->cdclk.logical.cdclk;
- if (IS_GEMINILAKE(dev_priv) || INTEL_GEN(dev_priv) >= 10)
+ if (IS_GEMINILAKE(dev_priv) || GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER))
max_dotclk *= 2; if (WARN_ON_ONCE(!crtc_clock || max_dotclk < crtc_clock))
@@ -13423,7 +13423,7 @@ static void intel_begin_crtc_commit(struct drm_crtc *crtc,
if (intel_cstate->update_pipe) intel_update_pipe_config(old_intel_cstate, intel_cstate);
- else if (INTEL_GEN(dev_priv) >= 9)
+ else if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
skl_detach_scalers(intel_cstate); out:
@@ -13711,7 +13711,7 @@ static bool i9xx_plane_has_fbc(struct drm_i915_private *dev_priv,
else if (IS_IVYBRIDGE(dev_priv)) return i9xx_plane == PLANE_A || i9xx_plane == PLANE_B || i9xx_plane == PLANE_C;
- else if (INTEL_GEN(dev_priv) >= 4)
+ else if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER))
return i9xx_plane == PLANE_A || i9xx_plane == PLANE_B; else return i9xx_plane == PLANE_A;
@@ -13729,7 +13729,7 @@ intel_primary_plane_create(struct drm_i915_private *dev_priv, enum pipe pipe)
int num_formats; int ret;
- if (INTEL_GEN(dev_priv) >= 9)
+ if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
return skl_universal_plane_create(dev_priv, pipe, PLANE_PRIMARY);
@@ -13742,7 +13742,7 @@ intel_primary_plane_create(struct drm_i915_private *dev_priv, enum pipe pipe)
* On gen2/3 only plane A can do FBC, but the panel fitter and LVDS * port is hooked to pipe B. Hence we want plane A feeding pipe B. */
- if (HAS_FBC(dev_priv) && INTEL_GEN(dev_priv) < 4)
+ if (HAS_FBC(dev_priv) && GT_GEN_RANGE(dev_priv, 0, 3))
plane->i9xx_plane = (enum i9xx_plane_id) !pipe; else plane->i9xx_plane = (enum i9xx_plane_id) pipe;
@@ -13756,7 +13756,7 @@ intel_primary_plane_create(struct drm_i915_private *dev_priv, enum pipe pipe)
fbc->possible_framebuffer_bits |= plane->frontbuffer_bit; }
- if (INTEL_GEN(dev_priv) >= 4) {
+ if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER)) {
formats = i965_primary_formats; num_formats = ARRAY_SIZE(i965_primary_formats); modifiers = i9xx_format_modifiers;
@@ -13784,7 +13784,7 @@ intel_primary_plane_create(struct drm_i915_private *dev_priv, enum pipe pipe)
possible_crtcs = BIT(pipe);
- if (INTEL_GEN(dev_priv) >= 5 || IS_G4X(dev_priv))
+ if (GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER) || IS_G4X(dev_priv))
ret = drm_universal_plane_init(&dev_priv->drm, &plane->base, possible_crtcs, plane_funcs, formats, num_formats, modifiers,
@@ -13804,14 +13804,14 @@ intel_primary_plane_create(struct drm_i915_private *dev_priv, enum pipe pipe)
supported_rotations = DRM_MODE_ROTATE_0 | DRM_MODE_ROTATE_180 | DRM_MODE_REFLECT_X;
- } else if (INTEL_GEN(dev_priv) >= 4) {
+ } else if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER)) {
supported_rotations = DRM_MODE_ROTATE_0 | DRM_MODE_ROTATE_180; } else { supported_rotations = DRM_MODE_ROTATE_0; }
- if (INTEL_GEN(dev_priv) >= 4)
+ if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER))
drm_plane_create_rotation_property(&plane->base, DRM_MODE_ROTATE_0, supported_rotations);
@@ -13875,7 +13875,7 @@ intel_cursor_plane_create(struct drm_i915_private *dev_priv,
if (ret) goto fail;
- if (INTEL_GEN(dev_priv) >= 4)
+ if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER))
drm_plane_create_rotation_property(&cursor->base, DRM_MODE_ROTATE_0, DRM_MODE_ROTATE_0 |
@@ -13975,7 +13975,7 @@ static int intel_crtc_init(struct drm_i915_private *dev_priv, enum pipe pipe)
dev_priv->pipe_to_crtc_mapping[pipe] != NULL); dev_priv->pipe_to_crtc_mapping[pipe] = intel_crtc;
- if (INTEL_GEN(dev_priv) < 9) {
+ if (GT_GEN_RANGE(dev_priv, 0, 8)) {
enum i9xx_plane_id i9xx_plane = primary->i9xx_plane; BUG_ON(i9xx_plane >= ARRAY_SIZE(dev_priv->plane_to_crtc_mapping) ||
@@ -14052,7 +14052,7 @@ static bool has_edp_a(struct drm_i915_private *dev_priv)
static bool intel_crt_present(struct drm_i915_private *dev_priv) {
- if (INTEL_GEN(dev_priv) >= 9)
+ if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
return false; if (IS_HSW_ULT(dev_priv) || IS_BDW_ULT(dev_priv))
@@ -14436,7 +14436,7 @@ static int intel_framebuffer_init(struct intel_framebuffer *intel_fb,
} /* fall through */ case I915_FORMAT_MOD_Y_TILED:
- if (INTEL_GEN(dev_priv) < 9) {
+ if (GT_GEN_RANGE(dev_priv, 0, 8)) {
DRM_DEBUG_KMS("Unsupported tiling 0x%llx!\n", mode_cmd->modifier[0]); goto err;
@@ -14455,7 +14455,7 @@ static int intel_framebuffer_init(struct intel_framebuffer *intel_fb,
* gen2/3 display engine uses the fence if present, * so the tiling mode must match the fb modifier exactly. */
- if (INTEL_GEN(dev_priv) < 4 &&
+ if (GT_GEN_RANGE(dev_priv, 0, 3) &&
tiling != intel_fb_modifier_to_tiling(mode_cmd->modifier[0])) { DRM_DEBUG_KMS("tiling_mode must match fb modifier exactly on gen2/3\n"); goto err;
@@ -14489,7 +14489,7 @@ static int intel_framebuffer_init(struct intel_framebuffer *intel_fb,
case DRM_FORMAT_ARGB8888: break; case DRM_FORMAT_XRGB1555:
- if (INTEL_GEN(dev_priv) > 3) {
+ if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER)) {
DRM_DEBUG_KMS("unsupported pixel format: %s\n", drm_get_format_name(mode_cmd->pixel_format, &format_name)); goto err;
@@ -14497,7 +14497,7 @@ static int intel_framebuffer_init(struct intel_framebuffer *intel_fb,
break; case DRM_FORMAT_ABGR8888: if (!IS_VALLEYVIEW(dev_priv) && !IS_CHERRYVIEW(dev_priv) &&
- INTEL_GEN(dev_priv) < 9) {
+ GT_GEN_RANGE(dev_priv, 0, 8)) {
DRM_DEBUG_KMS("unsupported pixel format: %s\n", drm_get_format_name(mode_cmd->pixel_format, &format_name)); goto err;
@@ -14506,7 +14506,7 @@ static int intel_framebuffer_init(struct intel_framebuffer *intel_fb,
case DRM_FORMAT_XBGR8888: case DRM_FORMAT_XRGB2101010: case DRM_FORMAT_XBGR2101010:
- if (INTEL_GEN(dev_priv) < 4) {
+ if (GT_GEN_RANGE(dev_priv, 0, 3)) {
DRM_DEBUG_KMS("unsupported pixel format: %s\n", drm_get_format_name(mode_cmd->pixel_format, &format_name)); goto err;
@@ -14523,14 +14523,14 @@ static int intel_framebuffer_init(struct intel_framebuffer *intel_fb,
case DRM_FORMAT_UYVY: case DRM_FORMAT_YVYU: case DRM_FORMAT_VYUY:
- if (INTEL_GEN(dev_priv) < 5 && !IS_G4X(dev_priv)) {
+ if (GT_GEN_RANGE(dev_priv, 0, 4) && !IS_G4X(dev_priv)) {
DRM_DEBUG_KMS("unsupported pixel format: %s\n", drm_get_format_name(mode_cmd->pixel_format, &format_name)); goto err; } break; case DRM_FORMAT_NV12:
- if (INTEL_GEN(dev_priv) < 9 || IS_SKYLAKE(dev_priv) ||
+ if (GT_GEN_RANGE(dev_priv, 0, 8) || IS_SKYLAKE(dev_priv) ||
IS_BROXTON(dev_priv)) { DRM_DEBUG_KMS("unsupported pixel format: %s\n", drm_get_format_name(mode_cmd->pixel_format,
@@ -14677,13 +14677,13 @@ intel_mode_valid(struct drm_device *dev,
DRM_MODE_FLAG_CLKDIV2)) return MODE_BAD;
- if (INTEL_GEN(dev_priv) >= 9 ||
+ if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER) ||
IS_BROADWELL(dev_priv) || IS_HASWELL(dev_priv)) { hdisplay_max = 8192; /* FDI max 4096 handled elsewhere */ vdisplay_max = 4096; htotal_max = 8192; vtotal_max = 8192;
- } else if (INTEL_GEN(dev_priv) >= 3) {
+ } else if (GT_GEN_RANGE(dev_priv, 3, GEN_FOREVER)) {
hdisplay_max = 4096; vdisplay_max = 4096; htotal_max = 8192;
@@ -14730,7 +14730,7 @@ void intel_init_display_hooks(struct drm_i915_private *dev_priv)
{ intel_init_cdclk_hooks(dev_priv);
- if (INTEL_GEN(dev_priv) >= 9) {
+ if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) {
dev_priv->display.get_pipe_config = haswell_get_pipe_config; dev_priv->display.get_initial_plane_config = skylake_get_initial_plane_config;
@@ -14809,7 +14809,7 @@ void intel_init_display_hooks(struct drm_i915_private *dev_priv)
dev_priv->display.fdi_link_train = hsw_fdi_link_train; }
- if (INTEL_GEN(dev_priv) >= 9)
+ if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
dev_priv->display.update_crtcs = skl_update_crtcs; else dev_priv->display.update_crtcs = intel_update_crtcs;
@@ -15237,7 +15237,7 @@ intel_sanitize_plane_mapping(struct drm_i915_private *dev_priv)
{ struct intel_crtc *crtc;
- if (INTEL_GEN(dev_priv) >= 4)
+ if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER))
return; for_each_intel_crtc(&dev_priv->drm, crtc) {
@@ -15398,7 +15398,7 @@ static void intel_sanitize_encoder(struct intel_encoder *encoder)
/* notify opregion of the sanitized encoder state */ intel_opregion_notify_encoder(encoder, connector && has_active_crtc);
- if (INTEL_GEN(dev_priv) >= 11)
+ if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER))
icl_sanitize_encoder_pll_mapping(encoder); }
@@ -15724,7 +15724,7 @@ intel_modeset_setup_hw_state(struct drm_device *dev,
} else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) { vlv_wm_get_hw_state(dev); vlv_wm_sanitize(dev_priv);
- } else if (INTEL_GEN(dev_priv) >= 9) {
+ } else if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) {
skl_wm_get_hw_state(dev); } else if (HAS_PCH_SPLIT(dev_priv)) { ilk_wm_get_hw_state(dev);
@@ -15842,7 +15842,7 @@ void intel_modeset_cleanup(struct drm_device *dev)
*/ int intel_modeset_vga_set_state(struct drm_i915_private *dev_priv, bool state) {
- unsigned reg = INTEL_GEN(dev_priv) >= 6 ? SNB_GMCH_CTRL : INTEL_GMCH_CTRL;
+ unsigned reg = GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER) ? SNB_GMCH_CTRL : INTEL_GMCH_CTRL;
u16 gmch_ctrl; if (pci_read_config_word(dev_priv->bridge_dev, reg, &gmch_ctrl)) {
@@ -15947,13 +15947,13 @@ intel_display_capture_error_state(struct drm_i915_private *dev_priv)
error->plane[i].control = I915_READ(DSPCNTR(i)); error->plane[i].stride = I915_READ(DSPSTRIDE(i));
- if (INTEL_GEN(dev_priv) <= 3) {
+ if (GT_GEN_RANGE(dev_priv, 0, 3)) {
error->plane[i].size = I915_READ(DSPSIZE(i)); error->plane[i].pos = I915_READ(DSPPOS(i)); }
- if (INTEL_GEN(dev_priv) <= 7 && !IS_HASWELL(dev_priv))
+ if (GT_GEN_RANGE(dev_priv, 0, 7) && !IS_HASWELL(dev_priv))
error->plane[i].addr = I915_READ(DSPADDR(i));
- if (INTEL_GEN(dev_priv) >= 4) {
+ if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER)) {
error->plane[i].surface = I915_READ(DSPSURF(i)); error->plane[i].tile_offset = I915_READ(DSPTILEOFF(i)); }
@@ -16018,13 +16018,13 @@ intel_display_print_error_state(struct drm_i915_error_state_buf *m,
err_printf(m, "Plane [%d]:\n", i); err_printf(m, " CNTR: %08x\n", error->plane[i].control); err_printf(m, " STRIDE: %08x\n", error->plane[i].stride);
- if (INTEL_GEN(dev_priv) <= 3) {
+ if (GT_GEN_RANGE(dev_priv, 0, 3)) {
err_printf(m, " SIZE: %08x\n", error->plane[i].size); err_printf(m, " POS: %08x\n", error->plane[i].pos); }
- if (INTEL_GEN(dev_priv) <= 7 && !IS_HASWELL(dev_priv))
+ if (GT_GEN_RANGE(dev_priv, 0, 7) && !IS_HASWELL(dev_priv))
err_printf(m, " ADDR: %08x\n", error->plane[i].addr);
- if (INTEL_GEN(dev_priv) >= 4) {
+ if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER)) {
err_printf(m, " SURF: %08x\n", error->plane[i].surface);
err_printf(m, " TILEOFF: %08x\n", error->plane[i].tile_offset); }
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 7f6ceb00574f..a554c31ffb70 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -339,7 +339,7 @@ intel_dp_set_source_rates(struct intel_dp *intel_dp)
/* This should only be done once */ WARN_ON(intel_dp->source_rates || intel_dp->num_source_rates);
- if (INTEL_GEN(dev_priv) >= 10) {
+ if (GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER)) {
source_rates = cnl_rates; size = ARRAY_SIZE(cnl_rates); if (GT_GEN(dev_priv, 10))
@@ -535,7 +535,7 @@ intel_dp_mode_valid(struct drm_connector *connector,
* Output bpp is stored in 6.4 format so right shift by 4 to get the * integer value since we support only integer values of bpp. */
- if ((INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv)) &&
+ if ((GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER) || IS_GEMINILAKE(dev_priv)) &&
drm_dp_sink_supports_dsc(intel_dp->dsc_dpcd)) { if (intel_dp_is_edp(intel_dp)) { dsc_max_output_bpp =
@@ -1549,7 +1549,7 @@ intel_dp_aux_init(struct intel_dp *intel_dp)
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); struct intel_encoder *encoder = &dig_port->base;
- if (INTEL_GEN(dev_priv) >= 9) {
+ if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) {
intel_dp->aux_ch_ctl_reg = skl_aux_ctl_reg; intel_dp->aux_ch_data_reg = skl_aux_data_reg; } else if (HAS_PCH_SPLIT(dev_priv)) {
@@ -1560,7 +1560,7 @@ intel_dp_aux_init(struct intel_dp *intel_dp)
intel_dp->aux_ch_data_reg = g4x_aux_data_reg; }
- if (INTEL_GEN(dev_priv) >= 9)
+ if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
intel_dp->get_aux_clock_divider = skl_get_aux_clock_divider; else if (IS_BROADWELL(dev_priv) || IS_HASWELL(dev_priv)) intel_dp->get_aux_clock_divider = hsw_get_aux_clock_divider;
@@ -1569,7 +1569,7 @@ intel_dp_aux_init(struct intel_dp *intel_dp)
else intel_dp->get_aux_clock_divider = g4x_get_aux_clock_divider;
- if (INTEL_GEN(dev_priv) >= 9)
+ if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
intel_dp->get_aux_send_ctl = skl_get_aux_send_ctl; else intel_dp->get_aux_send_ctl = g4x_get_aux_send_ctl;
@@ -1957,7 +1957,7 @@ intel_dp_compute_config(struct intel_encoder *encoder,
intel_fixed_panel_mode(intel_connector->panel.fixed_mode, adjusted_mode);
- if (INTEL_GEN(dev_priv) >= 9) {
+ if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) {
int ret; ret = skl_update_scaler_crtc(pipe_config);
@@ -3648,7 +3648,7 @@ intel_dp_set_signal_levels(struct intel_dp *intel_dp)
uint32_t signal_levels, mask = 0; uint8_t train_set = intel_dp->train_set[0];
- if (GT_GEN9_LP(dev_priv) || INTEL_GEN(dev_priv) >= 10) {
+ if (GT_GEN9_LP(dev_priv) || GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER)) {
signal_levels = bxt_signal_levels(intel_dp); } else if (HAS_DDI(dev_priv)) { signal_levels = ddi_signal_levels(intel_dp);
@@ -3926,7 +3926,7 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp)
intel_dp_set_common_rates(intel_dp); /* Read the eDP DSC DPCD registers */
- if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv))
+ if (GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER) || IS_GEMINILAKE(dev_priv))
intel_dp_get_dsc_sink_cap(intel_dp); return true;
@@ -5024,7 +5024,7 @@ bool intel_digital_port_connected(struct intel_encoder *encoder)
return g4x_digital_port_connected(encoder); }
- if (INTEL_GEN(dev_priv) >= 11)
+ if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER))
return icl_digital_port_connected(encoder); else if (GT_GEN(dev_priv, 10) || GT_GEN9_BC(dev_priv)) return spt_digital_port_connected(encoder);
@@ -5142,7 +5142,7 @@ intel_dp_detect(struct drm_connector *connector,
intel_dp_print_rates(intel_dp); /* Read DP Sink DSC Cap DPCD regs for DP v1.4 */
- if (INTEL_GEN(dev_priv) >= 11)
+ if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER))
intel_dp_get_dsc_sink_cap(intel_dp); drm_dp_read_desc(&intel_dp->aux, &intel_dp->desc,
@@ -5720,10 +5720,10 @@ bool intel_dp_is_port_edp(struct drm_i915_private *dev_priv, enum port port)
* eDP not supported on g4x. so bail out early just * for a bit extra safety in case the VBT is bonkers. */
- if (INTEL_GEN(dev_priv) < 5)
+ if (GT_GEN_RANGE(dev_priv, 0, 4))
return false;
- if (INTEL_GEN(dev_priv) < 9 && port == PORT_A)
+ if (GT_GEN_RANGE(dev_priv, 0, 8) && port == PORT_A)
return true; return intel_bios_is_port_edp(dev_priv, port);
@@ -5741,7 +5741,7 @@ intel_dp_add_properties(struct intel_dp *intel_dp, struct drm_connector *connect
intel_attach_broadcast_rgb_property(connector); if (HAS_GMCH_DISPLAY(dev_priv)) drm_connector_attach_max_bpc_property(connector, 6, 10);
- else if (INTEL_GEN(dev_priv) >= 5)
+ else if (GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER))
drm_connector_attach_max_bpc_property(connector, 6, 12); if (intel_dp_is_edp(intel_dp)) {
@@ -6096,7 +6096,7 @@ static void intel_dp_set_drrs_state(struct drm_i915_private *dev_priv,
return; }
- if (INTEL_GEN(dev_priv) >= 8 && !IS_CHERRYVIEW(dev_priv)) {
+ if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER) && !IS_CHERRYVIEW(dev_priv)) {
switch (index) { case DRRS_HIGH_RR: intel_dp_set_m_n(crtc_state, M1_N1);
@@ -6108,7 +6108,7 @@ static void intel_dp_set_drrs_state(struct drm_i915_private *dev_priv,
default: DRM_ERROR("Unsupported refreshrate type\n"); }
- } else if (INTEL_GEN(dev_priv) > 6) {
+ } else if (GT_GEN_RANGE(dev_priv, 7, GEN_FOREVER)) {
i915_reg_t reg = PIPECONF(crtc_state->cpu_transcoder); u32 val;
@@ -6381,7 +6381,7 @@ intel_dp_drrs_init(struct intel_connector *connector,
INIT_DELAYED_WORK(&dev_priv->drrs.work, intel_edp_drrs_downclock_work); mutex_init(&dev_priv->drrs.mutex);
- if (INTEL_GEN(dev_priv) <= 6) {
+ if (GT_GEN_RANGE(dev_priv, 0, 6)) {
DRM_DEBUG_KMS("DRRS supported for Gen7 and above\n"); return NULL; }
diff --git a/drivers/gpu/drm/i915/intel_dpll_mgr.c b/drivers/gpu/drm/i915/intel_dpll_mgr.c
index 5d756fdd1e7e..7a6cf48471b4 100644
--- a/drivers/gpu/drm/i915/intel_dpll_mgr.c
+++ b/drivers/gpu/drm/i915/intel_dpll_mgr.c
@@ -211,7 +211,7 @@ void intel_disable_shared_dpll(const struct intel_crtc_state *crtc_state)
unsigned int crtc_mask = drm_crtc_mask(&crtc->base); /* PCH only available on ILK+ */
- if (INTEL_GEN(dev_priv) < 5)
+ if (GT_GEN_RANGE(dev_priv, 0, 4))
return; if (pll == NULL)
@@ -1872,7 +1872,7 @@ static void intel_ddi_pll_init(struct drm_device *dev)
{ struct drm_i915_private *dev_priv = to_i915(dev);
- if (INTEL_GEN(dev_priv) < 9) {
+ if (GT_GEN_RANGE(dev_priv, 0, 8)) {
uint32_t val = I915_READ(LCPLL_CTL); /*
@@ -2213,7 +2213,7 @@ int cnl_hdmi_pll_ref_clock(struct drm_i915_private *dev_priv)
* For ICL+, the spec states: if reference frequency is 38.4, * use 19.2 because the DPLL automatically divides that by 2. */
- if (INTEL_GEN(dev_priv) >= 11 && ref_clock == 38400)
+ if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER) && ref_clock == 38400)
ref_clock = 19200; return ref_clock;
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index 6772e9974751..418ceb9a5e82 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -2237,7 +2237,7 @@ static inline bool icl_is_nv12_y_plane(enum plane_id id)
static inline bool icl_is_hdr_plane(struct intel_plane *plane) {
- if (INTEL_GEN(to_i915(plane->base.dev)) < 11)
+ if (GT_GEN_RANGE(to_i915(plane->base.dev), 0, 10))
return false; return plane->id < PLANE_SPRITE2;
diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
index 4032a635acb5..fdd6966e5b95 100644
--- a/drivers/gpu/drm/i915/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/intel_engine_cs.c
@@ -233,7 +233,7 @@ __intel_engine_context_size(struct drm_i915_private *dev_priv, u8 class)
case VIDEO_DECODE_CLASS: case VIDEO_ENHANCEMENT_CLASS: case COPY_ENGINE_CLASS:
- if (INTEL_GEN(dev_priv) < 8)
+ if (GT_GEN_RANGE(dev_priv, 0, 7))
return 0; return GEN8_LR_CONTEXT_OTHER_SIZE; }
@@ -730,10 +730,10 @@ u64 intel_engine_get_active_head(const struct intel_engine_cs *engine)
struct drm_i915_private *dev_priv = engine->i915; u64 acthd;
- if (INTEL_GEN(dev_priv) >= 8)
+ if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER))
acthd = I915_READ64_2x32(RING_ACTHD(engine->mmio_base), RING_ACTHD_UDW(engine->mmio_base));
- else if (INTEL_GEN(dev_priv) >= 4)
+ else if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER))
acthd = I915_READ(RING_ACTHD(engine->mmio_base)); else acthd = I915_READ(ACTHD);
@@ -746,7 +746,7 @@ u64 intel_engine_get_last_batch_head(const struct intel_engine_cs *engine)
struct drm_i915_private *dev_priv = engine->i915; u64 bbaddr;
- if (INTEL_GEN(dev_priv) >= 8)
+ if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER))
bbaddr = I915_READ64_2x32(RING_BBADDR(engine->mmio_base), RING_BBADDR_UDW(engine->mmio_base)); else
@@ -762,7 +762,7 @@ int intel_engine_stop_cs(struct intel_engine_cs *engine)
const i915_reg_t mode = RING_MI_MODE(base); int err;
- if (INTEL_GEN(dev_priv) < 3)
+ if (GT_GEN_RANGE(dev_priv, 0, 2))
return -ENODEV; GEM_TRACE("%s\n", engine->name);
@@ -815,7 +815,7 @@ u32 intel_calculate_mcr_s_ss_select(struct drm_i915_private *dev_priv)
if (GT_GEN(dev_priv, 10)) mcr_s_ss_select = GEN8_MCR_SLICE(slice) | GEN8_MCR_SUBSLICE(subslice);
- else if (INTEL_GEN(dev_priv) >= 11)
+ else if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER))
mcr_s_ss_select = GEN11_MCR_SLICE(slice) | GEN11_MCR_SUBSLICE(subslice); else
@@ -835,7 +835,7 @@ read_subslice_reg(struct drm_i915_private *dev_priv, int slice,
uint32_t ret; enum forcewake_domains fw_domains;
- if (INTEL_GEN(dev_priv) >= 11) {
+ if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) {
mcr_slice_subslice_mask = GEN11_MCR_SLICE_MASK | GEN11_MCR_SUBSLICE_MASK; mcr_slice_subslice_select = GEN11_MCR_SLICE(slice) |
@@ -950,7 +950,7 @@ static bool ring_is_idle(struct intel_engine_cs *engine)
idle = false; /* No bit for gen2, so assume the CS parser is idle */
- if (INTEL_GEN(dev_priv) > 2 && !(I915_READ_MODE(engine) & MODE_IDLE))
+ if (GT_GEN_RANGE(dev_priv, 3, GEN_FOREVER) && !(I915_READ_MODE(engine) & MODE_IDLE))
idle = false; intel_runtime_pm_put(dev_priv);
@@ -1297,13 +1297,13 @@ static void intel_engine_print_registers(const struct intel_engine_cs *engine,
drm_printf(m, "\tRING_CTL: 0x%08x%s\n", I915_READ(RING_CTL(engine->mmio_base)), I915_READ(RING_CTL(engine->mmio_base)) & (RING_WAIT | RING_WAIT_SEMAPHORE) ? " [waiting]" : "");
- if (INTEL_GEN(engine->i915) > 2) {
+ if (GT_GEN_RANGE(engine->i915, 3, GEN_FOREVER)) {
drm_printf(m, "\tRING_MODE: 0x%08x%s\n", I915_READ(RING_MI_MODE(engine->mmio_base)), I915_READ(RING_MI_MODE(engine->mmio_base)) & (MODE_IDLE) ? " [idle]" : ""); }
- if (INTEL_GEN(dev_priv) >= 6) {
+ if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) {
drm_printf(m, "\tRING_IMR: %08x\n", I915_READ_IMR(engine)); }
@@ -1323,16 +1323,16 @@ static void intel_engine_print_registers(const struct intel_engine_cs *engine,
addr = intel_engine_get_last_batch_head(engine); drm_printf(m, "\tBBADDR: 0x%08x_%08x\n", upper_32_bits(addr), lower_32_bits(addr));
- if (INTEL_GEN(dev_priv) >= 8)
+ if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER))
addr = I915_READ64_2x32(RING_DMA_FADD(engine->mmio_base), RING_DMA_FADD_UDW(engine->mmio_base));
- else if (INTEL_GEN(dev_priv) >= 4)
+ else if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER))
addr = I915_READ(RING_DMA_FADD(engine->mmio_base)); else addr = I915_READ(DMA_FADD_I8XX); drm_printf(m, "\tDMA_FADDR: 0x%08x_%08x\n", upper_32_bits(addr), lower_32_bits(addr));
- if (INTEL_GEN(dev_priv) >= 4) {
+ if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER)) {
drm_printf(m, "\tIPEIR: 0x%08x\n", I915_READ(RING_IPEIR(engine->mmio_base))); drm_printf(m, "\tIPEHR: 0x%08x\n",
@@ -1396,7 +1396,7 @@ static void intel_engine_print_registers(const struct intel_engine_cs *engine,
} drm_printf(m, "\t\tHW active? 0x%x\n", execlists->active); rcu_read_unlock();
- } else if (INTEL_GEN(dev_priv) > 6) {
+ } else if (GT_GEN_RANGE(dev_priv, 7, GEN_FOREVER)) {
drm_printf(m, "\tPP_DIR_BASE: 0x%08x\n", I915_READ(RING_PP_DIR_BASE(engine))); drm_printf(m, "\tPP_DIR_BASE_READ: 0x%08x\n",
diff --git a/drivers/gpu/drm/i915/intel_fbc.c b/drivers/gpu/drm/i915/intel_fbc.c
index 7ed976ada979..9d0d6e440fa6 100644
--- a/drivers/gpu/drm/i915/intel_fbc.c
+++ b/drivers/gpu/drm/i915/intel_fbc.c
@@ -48,7 +48,7 @@ static inline bool fbc_supported(struct drm_i915_private *dev_priv)
static inline bool no_fbc_on_multiple_pipes(struct drm_i915_private *dev_priv) {
- return INTEL_GEN(dev_priv) <= 3;
+ return GT_GEN_RANGE(dev_priv, 0, 3);
} /*
@@ -86,7 +86,7 @@ static int intel_fbc_calculate_cfb_size(struct drm_i915_private *dev_priv,
intel_fbc_get_plane_source_size(cache, NULL, &lines); if (GT_GEN(dev_priv, 7)) lines = min(lines, 2048);
- else if (INTEL_GEN(dev_priv) >= 8)
+ else if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER))
lines = min(lines, 2560); /* Hardware needs the full buffer stride, not just the active area. */
@@ -347,7 +347,7 @@ static void gen7_fbc_activate(struct drm_i915_private *dev_priv)
static bool intel_fbc_hw_is_active(struct drm_i915_private *dev_priv) {
- if (INTEL_GEN(dev_priv) >= 5)
+ if (GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER))
return ilk_fbc_is_active(dev_priv); else if (IS_GM45(dev_priv)) return g4x_fbc_is_active(dev_priv);
@@ -361,9 +361,9 @@ static void intel_fbc_hw_activate(struct drm_i915_private *dev_priv)
fbc->active = true;
- if (INTEL_GEN(dev_priv) >= 7)
+ if (GT_GEN_RANGE(dev_priv, 7, GEN_FOREVER))
gen7_fbc_activate(dev_priv);
- else if (INTEL_GEN(dev_priv) >= 5)
+ else if (GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER))
ilk_fbc_activate(dev_priv); else if (IS_GM45(dev_priv)) g4x_fbc_activate(dev_priv);
@@ -377,7 +377,7 @@ static void intel_fbc_hw_deactivate(struct drm_i915_private *dev_priv)
fbc->active = false;
- if (INTEL_GEN(dev_priv) >= 5)
+ if (GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER))
ilk_fbc_deactivate(dev_priv); else if (IS_GM45(dev_priv)) g4x_fbc_deactivate(dev_priv);
@@ -470,7 +470,7 @@ static int find_compression_threshold(struct drm_i915_private *dev_priv,
ret = i915_gem_stolen_insert_node_in_range(dev_priv, node, size >>= 1, 4096, 0, end);
- if (ret && INTEL_GEN(dev_priv) <= 4) {
+ if (ret && GT_GEN_RANGE(dev_priv, 0, 4)) {
return 0; } else if (ret) { compression_threshold <<= 1;
@@ -503,7 +503,7 @@ static int intel_fbc_alloc_cfb(struct intel_crtc *crtc)
fbc->threshold = ret;
- if (INTEL_GEN(dev_priv) >= 5)
+ if (GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER))
I915_WRITE(ILK_DPFC_CB_BASE, fbc->compressed_fb.start); else if (IS_GM45(dev_priv)) { I915_WRITE(DPFC_CB_BASE, fbc->compressed_fb.start);
@@ -626,10 +626,10 @@ static bool intel_fbc_hw_tracking_covers_screen(struct intel_crtc *crtc)
struct intel_fbc *fbc = &dev_priv->fbc; unsigned int effective_w, effective_h, max_w, max_h;
- if (INTEL_GEN(dev_priv) >= 8 || IS_HASWELL(dev_priv)) {
+ if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER) || IS_HASWELL(dev_priv)) {
max_w = 4096; max_h = 4096;
- } else if (IS_G4X(dev_priv) || INTEL_GEN(dev_priv) >= 5) {
+ } else if (IS_G4X(dev_priv) || GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER)) {
max_w = 4096; max_h = 2048; } else {
@@ -734,7 +734,7 @@ static bool intel_fbc_can_activate(struct intel_crtc *crtc)
fbc->no_fbc_reason = "framebuffer not tiled or fenced"; return false; }
- if (INTEL_GEN(dev_priv) <= 4 && !IS_G4X(dev_priv) &&
+ if (GT_GEN_RANGE(dev_priv, 0, 4) && !IS_G4X(dev_priv) &&
cache->plane.rotation != DRM_MODE_ROTATE_0) { fbc->no_fbc_reason = "rotation unsupported"; return false;
@@ -1275,7 +1275,7 @@ static int intel_sanitize_fbc_option(struct drm_i915_private *dev_priv)
if (!HAS_FBC(dev_priv)) return 0;
- if (IS_BROADWELL(dev_priv) || INTEL_GEN(dev_priv) >= 9)
+ if (IS_BROADWELL(dev_priv) || GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
return 1; return 0;
@@ -1321,7 +1321,7 @@ void intel_fbc_init(struct drm_i915_private *dev_priv)
} /* This value was pulled out of someone's hat */
- if (INTEL_GEN(dev_priv) <= 4 && !IS_GM45(dev_priv))
+ if (GT_GEN_RANGE(dev_priv, 0, 4) && !IS_GM45(dev_priv))
I915_WRITE(FBC_CONTROL, 500 << FBC_CTL_INTERVAL_SHIFT); /* We still don't have any sort of hardware state readout for FBC, so
diff --git a/drivers/gpu/drm/i915/intel_fifo_underrun.c b/drivers/gpu/drm/i915/intel_fifo_underrun.c
index 06f69bca0ff4..ff76da50153d 100644
--- a/drivers/gpu/drm/i915/intel_fifo_underrun.c
+++ b/drivers/gpu/drm/i915/intel_fifo_underrun.c
@@ -264,7 +264,7 @@ static bool __intel_set_cpu_fifo_underrun_reporting(struct drm_device *dev,
ironlake_set_fifo_underrun_reporting(dev, pipe, enable); else if (GT_GEN(dev_priv, 7)) ivybridge_set_fifo_underrun_reporting(dev, pipe, enable, old);
- else if (INTEL_GEN(dev_priv) >= 8)
+ else if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER))
broadwell_set_fifo_underrun_reporting(dev, pipe, enable); return old;
diff --git a/drivers/gpu/drm/i915/intel_hangcheck.c b/drivers/gpu/drm/i915/intel_hangcheck.c
index 84867ca2cc0c..64cfbc83ad8e 100644
--- a/drivers/gpu/drm/i915/intel_hangcheck.c
+++ b/drivers/gpu/drm/i915/intel_hangcheck.c
@@ -97,7 +97,7 @@ semaphore_waits_for(struct intel_engine_cs *engine, u32 *seqno)
* ringbuffer itself. */ head = I915_READ_HEAD(engine) & HEAD_ADDR;
- backwards = (INTEL_GEN(dev_priv) >= 8) ? 5 : 4;
+ backwards = (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER)) ? 5 : 4;
vaddr = (void __iomem *)engine->buffer->vaddr; for (i = backwards; i; --i) {
diff --git a/drivers/gpu/drm/i915/intel_hdcp.c b/drivers/gpu/drm/i915/intel_hdcp.c
index 1bf487f94254..726bab95c004 100644
--- a/drivers/gpu/drm/i915/intel_hdcp.c
+++ b/drivers/gpu/drm/i915/intel_hdcp.c
@@ -768,7 +768,7 @@ static void intel_hdcp_prop_work(struct work_struct *work)
bool is_hdcp_supported(struct drm_i915_private *dev_priv, enum port port) { /* PORT E doesn't have HDCP, and PORT F is disabled */
- return ((INTEL_GEN(dev_priv) >= 8 || IS_HASWELL(dev_priv)) &&
+ return ((GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER) || IS_HASWELL(dev_priv)) &&
!IS_CHERRYVIEW(dev_priv) && port < PORT_E); }
diff --git a/drivers/gpu/drm/i915/intel_hdmi.c b/drivers/gpu/drm/i915/intel_hdmi.c
index f954c2883f92..98527b905955 100644
--- a/drivers/gpu/drm/i915/intel_hdmi.c
+++ b/drivers/gpu/drm/i915/intel_hdmi.c
@@ -1476,11 +1476,11 @@ static int intel_hdmi_source_max_tmds_clock(struct intel_encoder *encoder)
&dev_priv->vbt.ddi_port_info[encoder->port]; int max_tmds_clock;
- if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv))
+ if (GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER) || IS_GEMINILAKE(dev_priv))
max_tmds_clock = 594000;
- else if (INTEL_GEN(dev_priv) >= 8 || IS_HASWELL(dev_priv))
+ else if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER) || IS_HASWELL(dev_priv))
max_tmds_clock = 300000;
- else if (INTEL_GEN(dev_priv) >= 5)
+ else if (GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER))
max_tmds_clock = 225000; else max_tmds_clock = 165000;
@@ -1579,7 +1579,7 @@ intel_hdmi_mode_valid(struct drm_connector *connector,
true, force_dvi); /* if we can't do 8,12bpc we may still be able to do 10bpc */
if (status != MODE_OK &&
INTEL_GEN(dev_priv) >= 11) + if (status != MODE_OK && GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) status = hdmi_port_clock_valid(hdmi, clock * 5 / 4, true, force_dvi); } @@ -1602,7 +1602,7 @@ static bool hdmi_deep_color_possible(const struct intel_crtc_state *crtc_state, if (HAS_GMCH_DISPLAY(dev_priv)) return false; - if (bpc == 10 && INTEL_GEN(dev_priv) < 11) + if (bpc == 10 && GT_GEN_RANGE(dev_priv, 0, 10)) return false; if (crtc_state->pipe_bpp <= 8*3) @@ -1797,7 +1797,7 @@ bool intel_hdmi_compute_config(struct intel_encoder *encoder, pipe_config->lane_count = 4; - if (scdc->scrambling.supported && (INTEL_GEN(dev_priv) >= 10 || + if (scdc->scrambling.supported && (GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER) || IS_GEMINILAKE(dev_priv))) { if (scdc->scrambling.low_rates) pipe_config->hdmi_scrambling = true; @@ -2399,7 +2399,7 @@ void intel_hdmi_init_connector(struct intel_digital_port *intel_dig_port, connector->doublescan_allowed = 0; connector->stereo_allowed = 1; - if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv)) + if (GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER) || IS_GEMINILAKE(dev_priv)) connector->ycbcr_420_allowed = true; intel_hdmi->ddc_bus = intel_hdmi_ddc_pin(dev_priv, port); diff --git a/drivers/gpu/drm/i915/intel_i2c.c b/drivers/gpu/drm/i915/intel_i2c.c index 86d898844a97..f85eaa57fa35 100644 --- a/drivers/gpu/drm/i915/intel_i2c.c +++ b/drivers/gpu/drm/i915/intel_i2c.c @@ -362,7 +362,7 @@ gmbus_wait_idle(struct drm_i915_private *dev_priv) static inline unsigned int gmbus_max_xfer_size(struct drm_i915_private *dev_priv) { - return INTEL_GEN(dev_priv) >= 9 ? GEN9_GMBUS_BYTE_COUNT_MAX : + return GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER) ? 
GEN9_GMBUS_BYTE_COUNT_MAX : GMBUS_BYTE_COUNT_MAX; } diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c index d8b53a0eac09..fd24c3939f46 100644 --- a/drivers/gpu/drm/i915/intel_lrc.c +++ b/drivers/gpu/drm/i915/intel_lrc.c @@ -239,7 +239,7 @@ intel_lr_context_descriptor_update(struct i915_gem_context *ctx, * Consider updating oa_get_render_ctx_id in i915_perf.c when changing * anything below. */ - if (INTEL_GEN(ctx->i915) >= 11) { + if (GT_GEN_RANGE(ctx->i915, 11, GEN_FOREVER)) { GEM_BUG_ON(ctx->hw_id >= BIT(GEN11_SW_CTX_ID_WIDTH)); desc |= (u64)ctx->hw_id << GEN11_SW_CTX_ID_SHIFT; /* bits 37-47 */ @@ -1587,7 +1587,7 @@ static void enable_execlists(struct intel_engine_cs *engine) * deeper FIFO it's not needed and it's not worth adding * more statements to the irq handler to support it. */ - if (INTEL_GEN(dev_priv) >= 11) + if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) I915_WRITE(RING_MODE_GEN7(engine), _MASKED_BIT_DISABLE(GEN11_GFX_DISABLE_LEGACY_MODE)); else @@ -2167,7 +2167,7 @@ logical_ring_default_vfuncs(struct intel_engine_cs *engine) engine->set_default_submission = intel_execlists_set_default_submission; - if (INTEL_GEN(engine->i915) < 11) { + if (GT_GEN_RANGE(engine->i915, 0, 10)) { engine->irq_enable = gen8_logical_ring_enable_irq; engine->irq_disable = gen8_logical_ring_disable_irq; } else { @@ -2186,7 +2186,7 @@ logical_ring_default_irqs(struct intel_engine_cs *engine) { unsigned int shift = 0; - if (INTEL_GEN(engine->i915) < 11) { + if (GT_GEN_RANGE(engine->i915, 0, 10)) { const u8 irq_shifts[] = { [RCS] = GEN8_RCS_IRQ_SHIFT, [BCS] = GEN8_BCS_IRQ_SHIFT, @@ -2286,7 +2286,7 @@ int logical_render_ring_init(struct intel_engine_cs *engine) engine->irq_keep_mask |= GT_RENDER_L3_PARITY_ERROR_INTERRUPT; /* Override some for render ring. 
*/ - if (INTEL_GEN(dev_priv) >= 9) + if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) engine->init_hw = gen9_init_render_ring; else engine->init_hw = gen8_init_render_ring; @@ -2340,7 +2340,7 @@ make_rpcs(struct drm_i915_private *dev_priv) * No explicit RPCS request is needed to ensure full * slice/subslice/EU enablement prior to Gen9. */ - if (INTEL_GEN(dev_priv) < 9) + if (GT_GEN_RANGE(dev_priv, 0, 8)) return 0; /* @@ -2384,7 +2384,7 @@ make_rpcs(struct drm_i915_private *dev_priv) if (INTEL_INFO(dev_priv)->sseu.has_slice_pg) { u32 mask, val = slices; - if (INTEL_GEN(dev_priv) >= 11) { + if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) { mask = GEN11_RPCS_S_CNT_MASK; val <<= GEN11_RPCS_S_CNT_SHIFT; } else { @@ -2483,7 +2483,7 @@ static void execlists_init_reg_state(u32 *regs, CTX_REG(regs, CTX_CONTEXT_CONTROL, RING_CONTEXT_CONTROL(engine), _MASKED_BIT_DISABLE(CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT) | _MASKED_BIT_ENABLE(CTX_CTRL_INHIBIT_SYN_CTX_SWITCH)); - if (INTEL_GEN(dev_priv) < 11) { + if (GT_GEN_RANGE(dev_priv, 0, 10)) { regs[CTX_CONTEXT_CONTROL + 1] |= _MASKED_BIT_DISABLE(CTX_CTRL_ENGINE_CTX_SAVE_INHIBIT | CTX_CTRL_RS_CTX_ENABLE); @@ -2555,7 +2555,7 @@ static void execlists_init_reg_state(u32 *regs, } regs[CTX_END] = MI_BATCH_BUFFER_END; - if (INTEL_GEN(dev_priv) >= 10) + if (GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER)) regs[CTX_END] |= BIT(0); } @@ -2610,7 +2610,7 @@ populate_lr_context(struct i915_gem_context *ctx, if (!engine->default_state) regs[CTX_CONTEXT_CONTROL + 1] |= _MASKED_BIT_ENABLE(CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT); - if (ctx == ctx->i915->preempt_context && INTEL_GEN(engine->i915) < 11) + if (ctx == ctx->i915->preempt_context && GT_GEN_RANGE(engine->i915, 0, 10)) regs[CTX_CONTEXT_CONTROL + 1] |= _MASKED_BIT_ENABLE(CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT | CTX_CTRL_ENGINE_CTX_SAVE_INHIBIT); diff --git a/drivers/gpu/drm/i915/intel_lvds.c b/drivers/gpu/drm/i915/intel_lvds.c index 79fa6b09a8db..968aa3a86350 100644 --- a/drivers/gpu/drm/i915/intel_lvds.c +++ 
b/drivers/gpu/drm/i915/intel_lvds.c @@ -129,12 +129,12 @@ static void intel_lvds_get_config(struct intel_encoder *encoder, pipe_config->base.adjusted_mode.flags |= flags; - if (INTEL_GEN(dev_priv) < 5) + if (GT_GEN_RANGE(dev_priv, 0, 4)) pipe_config->gmch_pfit.lvds_border_bits = tmp & LVDS_BORDER_ENABLE; /* gen2/3 store dither state in pfit control, needs to match */ - if (INTEL_GEN(dev_priv) < 4) { + if (GT_GEN_RANGE(dev_priv, 0, 3)) { tmp = I915_READ(PFIT_CONTROL); pipe_config->gmch_pfit.control |= tmp & PANEL_8TO6_DITHER_ENABLE; @@ -179,7 +179,7 @@ static void intel_lvds_pps_get_hw_state(struct drm_i915_private *dev_priv, /* Convert from 100ms to 100us units */ pps->t4 = val * 1000; - if (INTEL_GEN(dev_priv) <= 4 && + if (GT_GEN_RANGE(dev_priv, 0, 4) && pps->t1_t2 == 0 && pps->t5 == 0 && pps->t3 == 0 && pps->tx == 0) { DRM_DEBUG_KMS("Panel power timings uninitialized, " "setting defaults\n"); @@ -393,7 +393,7 @@ static bool intel_lvds_compute_config(struct intel_encoder *intel_encoder, unsigned int lvds_bpp; /* Should never happen!! 
*/ - if (INTEL_GEN(dev_priv) < 4 && intel_crtc->pipe == 0) { + if (GT_GEN_RANGE(dev_priv, 0, 3) && intel_crtc->pipe == 0) { DRM_ERROR("Can't support LVDS on pipe A\n"); return false; } @@ -810,7 +810,7 @@ static bool intel_lvds_supported(struct drm_i915_private *dev_priv) * Otherwise LVDS was only attached to mobile products, * except for the inglorious 830gm */ - if (INTEL_GEN(dev_priv) <= 4 && + if (GT_GEN_RANGE(dev_priv, 0, 4) && IS_MOBILE(dev_priv) && !IS_I830(dev_priv)) return true; diff --git a/drivers/gpu/drm/i915/intel_mocs.c b/drivers/gpu/drm/i915/intel_mocs.c index 6b9076fd5836..e3168732f90f 100644 --- a/drivers/gpu/drm/i915/intel_mocs.c +++ b/drivers/gpu/drm/i915/intel_mocs.c @@ -188,7 +188,7 @@ static bool get_mocs_settings(struct drm_i915_private *dev_priv, table->table = broxton_mocs_table; result = true; } else { - WARN_ONCE(INTEL_GEN(dev_priv) >= 9, + WARN_ONCE(GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER), "Platform that should have a MOCS table does not.\n"); } diff --git a/drivers/gpu/drm/i915/intel_overlay.c b/drivers/gpu/drm/i915/intel_overlay.c index c8eddf941762..66d4228968c9 100644 --- a/drivers/gpu/drm/i915/intel_overlay.c +++ b/drivers/gpu/drm/i915/intel_overlay.c @@ -897,7 +897,7 @@ static void update_pfit_vscale_ratio(struct intel_overlay *overlay) /* XXX: This is not the same logic as in the xorg driver, but more in * line with the intel documentation for the i965 */ - if (INTEL_GEN(dev_priv) >= 4) { + if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER)) { /* on i965 use the PGM reg to read out the autoscaler values */ ratio = I915_READ(PFIT_PGM_RATIOS) >> PFIT_VERT_SCALE_SHIFT_965; } else { diff --git a/drivers/gpu/drm/i915/intel_panel.c b/drivers/gpu/drm/i915/intel_panel.c index 78d5b9da3a02..8de6d1dcfe82 100644 --- a/drivers/gpu/drm/i915/intel_panel.c +++ b/drivers/gpu/drm/i915/intel_panel.c @@ -326,7 +326,7 @@ void intel_gmch_panel_fitting(struct intel_crtc *intel_crtc, break; case DRM_MODE_SCALE_ASPECT: /* Scale but preserve the aspect ratio */ - 
if (INTEL_GEN(dev_priv) >= 4) + if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER)) i965_scale_aspect(pipe_config, &pfit_control); else i9xx_scale_aspect(pipe_config, &pfit_control, @@ -340,7 +340,7 @@ void intel_gmch_panel_fitting(struct intel_crtc *intel_crtc, if (pipe_config->pipe_src_h != adjusted_mode->crtc_vdisplay || pipe_config->pipe_src_w != adjusted_mode->crtc_hdisplay) { pfit_control |= PFIT_ENABLE; - if (INTEL_GEN(dev_priv) >= 4) + if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER)) pfit_control |= PFIT_SCALING_AUTO; else pfit_control |= (VERT_AUTO_SCALE | @@ -356,7 +356,7 @@ void intel_gmch_panel_fitting(struct intel_crtc *intel_crtc, /* 965+ wants fuzzy fitting */ /* FIXME: handle multiple panels by failing gracefully */ - if (INTEL_GEN(dev_priv) >= 4) + if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER)) pfit_control |= ((intel_crtc->pipe << PFIT_PIPE_SHIFT) | PFIT_FILTER_FUZZY); @@ -367,7 +367,7 @@ void intel_gmch_panel_fitting(struct intel_crtc *intel_crtc, } /* Make sure pre-965 set dither correctly for 18bpp panels. 
*/ - if (INTEL_GEN(dev_priv) < 4 && pipe_config->pipe_bpp == 18) + if (GT_GEN_RANGE(dev_priv, 0, 3) && pipe_config->pipe_bpp == 18) pfit_control |= PANEL_8TO6_DITHER_ENABLE; pipe_config->gmch_pfit.control = pfit_control; @@ -481,7 +481,7 @@ static u32 i9xx_get_backlight(struct intel_connector *connector) u32 val; val = I915_READ(BLC_PWM_CTL) & BACKLIGHT_DUTY_CYCLE_MASK; - if (INTEL_GEN(dev_priv) < 4) + if (GT_GEN_RANGE(dev_priv, 0, 3)) val >>= 1; if (panel->backlight.combination_mode) { diff --git a/drivers/gpu/drm/i915/intel_pipe_crc.c b/drivers/gpu/drm/i915/intel_pipe_crc.c index a426978b233d..597e52bc15e8 100644 --- a/drivers/gpu/drm/i915/intel_pipe_crc.c +++ b/drivers/gpu/drm/i915/intel_pipe_crc.c @@ -429,7 +429,7 @@ static int get_new_crc_ctl_reg(struct drm_i915_private *dev_priv, { if (GT_GEN(dev_priv, 2)) return i8xx_pipe_crc_ctl_reg(source, val); - else if (INTEL_GEN(dev_priv) < 5) + else if (GT_GEN_RANGE(dev_priv, 0, 4)) return i9xx_pipe_crc_ctl_reg(dev_priv, pipe, source, val); else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) return vlv_pipe_crc_ctl_reg(dev_priv, pipe, source, val); @@ -546,7 +546,7 @@ intel_is_valid_crc_source(struct drm_i915_private *dev_priv, { if (GT_GEN(dev_priv, 2)) return i8xx_crc_source_valid(dev_priv, source); - else if (INTEL_GEN(dev_priv) < 5) + else if (GT_GEN_RANGE(dev_priv, 0, 4)) return i9xx_crc_source_valid(dev_priv, source); else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) return vlv_crc_source_valid(dev_priv, source); diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c index 3a321600bb78..e721e3e80daf 100644 --- a/drivers/gpu/drm/i915/intel_pm.c +++ b/drivers/gpu/drm/i915/intel_pm.c @@ -2573,9 +2573,9 @@ static uint32_t ilk_compute_fbc_wm(const struct intel_crtc_state *cstate, static unsigned int ilk_display_fifo_size(const struct drm_i915_private *dev_priv) { - if (INTEL_GEN(dev_priv) >= 8) + if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER)) return 3072; - else if 
(INTEL_GEN(dev_priv) >= 7) + else if (GT_GEN_RANGE(dev_priv, 7, GEN_FOREVER)) return 768; else return 512; @@ -2585,10 +2585,10 @@ static unsigned int ilk_plane_wm_reg_max(const struct drm_i915_private *dev_priv, int level, bool is_sprite) { - if (INTEL_GEN(dev_priv) >= 8) + if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER)) /* BDW primary/sprite plane watermarks */ return level == 0 ? 255 : 2047; - else if (INTEL_GEN(dev_priv) >= 7) + else if (GT_GEN_RANGE(dev_priv, 7, GEN_FOREVER)) /* IVB/HSW primary/sprite plane watermarks */ return level == 0 ? 127 : 1023; else if (!is_sprite) @@ -2602,7 +2602,7 @@ ilk_plane_wm_reg_max(const struct drm_i915_private *dev_priv, static unsigned int ilk_cursor_wm_reg_max(const struct drm_i915_private *dev_priv, int level) { - if (INTEL_GEN(dev_priv) >= 7) + if (GT_GEN_RANGE(dev_priv, 7, GEN_FOREVER)) return level == 0 ? 63 : 255; else return level == 0 ? 31 : 63; @@ -2610,7 +2610,7 @@ ilk_cursor_wm_reg_max(const struct drm_i915_private *dev_priv, int level) static unsigned int ilk_fbc_wm_reg_max(const struct drm_i915_private *dev_priv) { - if (INTEL_GEN(dev_priv) >= 8) + if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER)) return 31; else return 15; @@ -2639,7 +2639,7 @@ static unsigned int ilk_plane_wm_max(const struct drm_device *dev, * FIFO size is only half of the self * refresh FIFO size on ILK/SNB. 
*/ - if (INTEL_GEN(dev_priv) <= 6) + if (GT_GEN_RANGE(dev_priv, 0, 6)) fifo_size /= 2; } @@ -2800,7 +2800,7 @@ hsw_compute_linetime_wm(const struct intel_crtc_state *cstate) static void intel_read_wm_latency(struct drm_i915_private *dev_priv, uint16_t wm[8]) { - if (INTEL_GEN(dev_priv) >= 9) { + if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) { uint32_t val; int ret, i; int level, max_level = ilk_wm_max_level(dev_priv); @@ -2894,14 +2894,14 @@ static void intel_read_wm_latency(struct drm_i915_private *dev_priv, wm[2] = (sskpd >> 12) & 0xFF; wm[3] = (sskpd >> 20) & 0x1FF; wm[4] = (sskpd >> 32) & 0x1FF; - } else if (INTEL_GEN(dev_priv) >= 6) { + } else if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) { uint32_t sskpd = I915_READ(MCH_SSKPD); wm[0] = (sskpd >> SSKPD_WM0_SHIFT) & SSKPD_WM_MASK; wm[1] = (sskpd >> SSKPD_WM1_SHIFT) & SSKPD_WM_MASK; wm[2] = (sskpd >> SSKPD_WM2_SHIFT) & SSKPD_WM_MASK; wm[3] = (sskpd >> SSKPD_WM3_SHIFT) & SSKPD_WM_MASK; - } else if (INTEL_GEN(dev_priv) >= 5) { + } else if (GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER)) { uint32_t mltr = I915_READ(MLTR_ILK); /* ILK primary LP0 latency is 700 ns */ @@ -2932,11 +2932,11 @@ static void intel_fixup_cur_wm_latency(struct drm_i915_private *dev_priv, int ilk_wm_max_level(const struct drm_i915_private *dev_priv) { /* how many WM levels are we expecting */ - if (INTEL_GEN(dev_priv) >= 9) + if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) return 7; else if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) return 4; - else if (INTEL_GEN(dev_priv) >= 6) + else if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) return 3; else return 2; @@ -2961,7 +2961,7 @@ static void intel_print_wm_latency(struct drm_i915_private *dev_priv, * - latencies are in us on gen9. 
* - before then, WM1+ latency values are in 0.5us units */ - if (INTEL_GEN(dev_priv) >= 9) + if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) latency *= 10; else if (level > 0) latency *= 5; @@ -3097,7 +3097,7 @@ static int ilk_compute_pipe_wm(struct intel_crtc_state *cstate) usable_level = max_level; /* ILK/SNB: LP2+ watermarks only w/o sprites */ - if (INTEL_GEN(dev_priv) <= 6 && pipe_wm->sprites_enabled) + if (GT_GEN_RANGE(dev_priv, 0, 6) && pipe_wm->sprites_enabled) usable_level = 1; /* ILK/SNB/IVB: LP1+ watermarks only w/o scaling */ @@ -3242,12 +3242,12 @@ static void ilk_wm_merge(struct drm_device *dev, int last_enabled_level = max_level; /* ILK/SNB/IVB: LP1+ watermarks only w/ single pipe */ - if ((INTEL_GEN(dev_priv) <= 6 || IS_IVYBRIDGE(dev_priv)) && + if ((GT_GEN_RANGE(dev_priv, 0, 6) || IS_IVYBRIDGE(dev_priv)) && config->num_pipes_active > 1) last_enabled_level = 0; /* ILK: FBC WM must be disabled always */ - merged->fbc_wm_enabled = INTEL_GEN(dev_priv) >= 6; + merged->fbc_wm_enabled = GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER); /* merge each WM1+ level */ for (level = 1; level <= max_level; level++) { @@ -3337,7 +3337,7 @@ static void ilk_compute_wm_results(struct drm_device *dev, if (r->enable) results->wm_lp[wm_lp - 1] |= WM1_LP_SR_EN; - if (INTEL_GEN(dev_priv) >= 8) + if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER)) results->wm_lp[wm_lp - 1] |= r->fbc_val << WM1_LP_FBC_SHIFT_BDW; else @@ -3348,7 +3348,7 @@ static void ilk_compute_wm_results(struct drm_device *dev, * Always set WM1S_LP_EN when spr_val != 0, even if the * level is disabled. Doing otherwise could cause underruns. 
*/ - if (INTEL_GEN(dev_priv) <= 6 && r->spr_val) { + if (GT_GEN_RANGE(dev_priv, 0, 6) && r->spr_val) { WARN_ON(wm_lp != 1); results->wm_lp_spr[wm_lp - 1] = WM1S_LP_EN | r->spr_val; } else @@ -3553,7 +3553,7 @@ static void ilk_write_wm_values(struct drm_i915_private *dev_priv, previous->wm_lp_spr[0] != results->wm_lp_spr[0]) I915_WRITE(WM1S_LP_ILK, results->wm_lp_spr[0]); - if (INTEL_GEN(dev_priv) >= 7) { + if (GT_GEN_RANGE(dev_priv, 7, GEN_FOREVER)) { if (dirty & WM_DIRTY_LP(2) && previous->wm_lp_spr[1] != results->wm_lp_spr[1]) I915_WRITE(WM2S_LP_IVB, results->wm_lp_spr[1]); if (dirty & WM_DIRTY_LP(3) && previous->wm_lp_spr[2] != results->wm_lp_spr[2]) @@ -3585,7 +3585,7 @@ static u8 intel_enabled_dbuf_slices_num(struct drm_i915_private *dev_priv) enabled_slices = 1; /* Gen prior to GEN11 have only one DBuf slice */ - if (INTEL_GEN(dev_priv) < 11) + if (GT_GEN_RANGE(dev_priv, 0, 10)) return enabled_slices; if (I915_READ(DBUF_CTL_S2) & DBUF_POWER_STATE) @@ -3611,7 +3611,7 @@ static bool skl_needs_memory_bw_wa(struct intel_atomic_state *state) static bool intel_has_sagv(struct drm_i915_private *dev_priv) { - return (GT_GEN9_BC(dev_priv) || INTEL_GEN(dev_priv) >= 10) && + return (GT_GEN9_BC(dev_priv) || GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER)) && dev_priv->sagv_status != I915_SAGV_NOT_CONTROLLED; } @@ -3786,7 +3786,7 @@ static u16 intel_get_ddb_size(struct drm_i915_private *dev_priv, WARN_ON(ddb_size == 0); - if (INTEL_GEN(dev_priv) < 11) + if (GT_GEN_RANGE(dev_priv, 0, 10)) return ddb_size - 4; /* 4 blocks for bypass path allocation */ adjusted_mode = &cstate->base.adjusted_mode; @@ -3896,7 +3896,7 @@ static void skl_ddb_entry_init_from_hw(struct drm_i915_private *dev_priv, { u16 mask; - if (INTEL_GEN(dev_priv) >= 11) + if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) mask = ICL_DDB_ENTRY_MASK; else mask = SKL_DDB_ENTRY_MASK; @@ -3936,7 +3936,7 @@ skl_ddb_get_hw_plane_state(struct drm_i915_private *dev_priv, val & PLANE_CTL_ALPHA_MASK); val = 
I915_READ(PLANE_BUF_CFG(pipe, plane_id)); - if (fourcc == DRM_FORMAT_NV12 && INTEL_GEN(dev_priv) < 11) { + if (fourcc == DRM_FORMAT_NV12 && GT_GEN_RANGE(dev_priv, 0, 10)) { val2 = I915_READ(PLANE_NV12_BUF_CFG(pipe, plane_id)); skl_ddb_entry_init_from_hw(dev_priv, @@ -4112,7 +4112,7 @@ int skl_check_pipe_max_pixel_rate(struct intel_crtc *intel_crtc, crtc_clock = crtc_state->adjusted_mode.crtc_clock; dotclk = to_intel_atomic_state(state)->cdclk.logical.cdclk; - if (IS_GEMINILAKE(dev_priv) || INTEL_GEN(dev_priv) >= 10) + if (IS_GEMINILAKE(dev_priv) || GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER)) dotclk *= 2; pipe_max_pixel_rate = div_round_up_u32_fixed16(dotclk, pipe_downscale); @@ -4394,7 +4394,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *cstate, return 0; } - if (INTEL_GEN(dev_priv) < 11) + if (GT_GEN_RANGE(dev_priv, 0, 10)) total_data_rate = skl_get_total_relative_data_rate(cstate, plane_data_rate, @@ -4476,7 +4476,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *cstate, uv_plane_blocks += div64_u64(alloc_size * uv_data_rate, total_data_rate); /* Gen11+ uses a separate plane for UV watermarks */ - WARN_ON(INTEL_GEN(dev_priv) >= 11 && uv_plane_blocks); + WARN_ON(GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER) && uv_plane_blocks); if (uv_data_rate) { ddb->uv_plane[pipe][plane_id].start = start; @@ -4509,7 +4509,7 @@ skl_wm_method1(const struct drm_i915_private *dev_priv, uint32_t pixel_rate, wm_intermediate_val = latency * pixel_rate * cpp; ret = div_fixed16(wm_intermediate_val, 1000 * dbuf_block_size); - if (INTEL_GEN(dev_priv) >= 10) + if (GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER)) ret = add_fixed16_u32(ret, 1); return ret; @@ -4626,7 +4626,7 @@ skl_compute_plane_wm_params(const struct drm_i915_private *dev_priv, wp->plane_pixel_rate = skl_adjusted_plane_pixel_rate(cstate, intel_pstate); - if (INTEL_GEN(dev_priv) >= 11 && + if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER) && fb->modifier == I915_FORMAT_MOD_Yf_TILED && wp->cpp == 8) wp->dbuf_block_size = 256; else @@ -4661,7 
+4661,7 @@ skl_compute_plane_wm_params(const struct drm_i915_private *dev_priv, wp->y_min_scanlines, wp->dbuf_block_size); - if (INTEL_GEN(dev_priv) >= 10) + if (GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER)) interm_pbpl++; wp->plane_blocks_per_line = div_fixed16(interm_pbpl, @@ -4778,7 +4778,7 @@ static int skl_compute_plane_wm(const struct drm_i915_private *dev_priv, res_blocks = result_prev->plane_res_b; } - if (INTEL_GEN(dev_priv) >= 11) { + if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) { if (wp->y_tiled) { uint32_t extra_lines; uint_fixed_16_16_t fp_min_disp_buf_needed; @@ -4910,7 +4910,7 @@ static void skl_compute_transition_wm(const struct intel_crtc_state *cstate, goto exit; /* Transition WM are not recommended by HW team for GEN9 */ - if (INTEL_GEN(dev_priv) <= 9) + if (GT_GEN_RANGE(dev_priv, 0, 9)) goto exit; /* Transition WM don't make any sense if ipc is disabled */ @@ -4918,7 +4918,7 @@ static void skl_compute_transition_wm(const struct intel_crtc_state *cstate, goto exit; trans_min = 14; - if (INTEL_GEN(dev_priv) >= 11) + if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) trans_min = 4; trans_offset_b = trans_min + trans_amount; @@ -5132,7 +5132,7 @@ static void skl_write_plane_wm(struct intel_crtc *intel_crtc, skl_write_wm_level(dev_priv, PLANE_WM_TRANS(pipe, plane_id), &wm->trans_wm); - if (wm->is_planar && INTEL_GEN(dev_priv) < 11) { + if (wm->is_planar && GT_GEN_RANGE(dev_priv, 0, 10)) { skl_ddb_entry_write(dev_priv, PLANE_BUF_CFG(pipe, plane_id), &ddb->uv_plane[pipe][plane_id]); skl_ddb_entry_write(dev_priv, @@ -5141,7 +5141,7 @@ static void skl_write_plane_wm(struct intel_crtc *intel_crtc, } else { skl_ddb_entry_write(dev_priv, PLANE_BUF_CFG(pipe, plane_id), &ddb->plane[pipe][plane_id]); - if (INTEL_GEN(dev_priv) < 11) + if (GT_GEN_RANGE(dev_priv, 0, 10)) I915_WRITE(PLANE_NV12_BUF_CFG(pipe, plane_id), 0x0); } } @@ -5573,7 +5573,7 @@ static void ilk_program_watermarks(struct drm_i915_private *dev_priv) ilk_wm_merge(dev, &config, &max, &lp_wm_1_2); /* 5/6 
split only in single pipe config on IVB+ */ - if (INTEL_GEN(dev_priv) >= 7 && + if (GT_GEN_RANGE(dev_priv, 7, GEN_FOREVER) && config.num_pipes_active == 1 && config.sprites_enabled) { ilk_compute_wm_maximums(dev, 1, &config, INTEL_DDB_PART_5_6, &max); ilk_wm_merge(dev, &config, &max, &lp_wm_5_6); @@ -6176,7 +6176,7 @@ void ilk_wm_get_hw_state(struct drm_device *dev) hw->wm_lp[2] = I915_READ(WM3_LP_ILK); hw->wm_lp_spr[0] = I915_READ(WM1S_LP_ILK); - if (INTEL_GEN(dev_priv) >= 7) { + if (GT_GEN_RANGE(dev_priv, 7, GEN_FOREVER)) { hw->wm_lp_spr[1] = I915_READ(WM2S_LP_IVB); hw->wm_lp_spr[2] = I915_READ(WM3S_LP_IVB); } @@ -6406,7 +6406,7 @@ static u32 intel_rps_limits(struct drm_i915_private *dev_priv, u8 val) * the hw runs at the minimal clock before selecting the desired * frequency, if the down threshold expires in that window we will not * receive a down interrupt. */ - if (INTEL_GEN(dev_priv) >= 9) { + if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) { limits = (rps->max_freq_softlimit) << 23; if (val <= rps->min_freq_softlimit) limits |= (rps->min_freq_softlimit) << 14; @@ -6540,7 +6540,7 @@ void intel_rps_mark_interactive(struct drm_i915_private *i915, bool interactive) { struct intel_rps *rps = &i915->gt_pm.rps; - if (INTEL_GEN(i915) < 6) + if (GT_GEN_RANGE(i915, 0, 5)) return; mutex_lock(&rps->power.mutex); @@ -6583,7 +6583,7 @@ static int gen6_set_rps(struct drm_i915_private *dev_priv, u8 val) if (val != rps->cur_freq) { gen6_set_rps_thresholds(dev_priv, val); - if (INTEL_GEN(dev_priv) >= 9) + if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) I915_WRITE(GEN6_RPNSWREQ, GEN9_FREQUENCY(val)); else if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) @@ -6934,7 +6934,7 @@ static void gen6_init_rps_frequencies(struct drm_i915_private *dev_priv) rps->efficient_freq = rps->rp1_freq; if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv) || - GT_GEN9_BC(dev_priv) || INTEL_GEN(dev_priv) >= 10) { + GT_GEN9_BC(dev_priv) || GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER)) { u32 ddcc_status = 0; if 
(sandybridge_pcode_read(dev_priv, @@ -6947,7 +6947,7 @@ static void gen6_init_rps_frequencies(struct drm_i915_private *dev_priv) rps->max_freq); } - if (GT_GEN9_BC(dev_priv) || INTEL_GEN(dev_priv) >= 10) { + if (GT_GEN9_BC(dev_priv) || GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER)) { /* Store the frequency values in 16.66 MHZ units, which is * the natural hardware unit for SKL */ @@ -7014,7 +7014,7 @@ static void gen9_enable_rc6(struct drm_i915_private *dev_priv) I915_WRITE(GEN6_RC_CONTROL, 0); /* 2b: Program RC6 thresholds.*/ - if (INTEL_GEN(dev_priv) >= 10) { + if (GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER)) { I915_WRITE(GEN6_RC6_WAKE_RATE_LIMIT, 54 << 16 | 85); I915_WRITE(GEN10_MEDIA_WAKE_RATE_LIMIT, 150); } else if (IS_SKYLAKE(dev_priv)) { @@ -7285,7 +7285,7 @@ static void gen6_update_ring_freq(struct drm_i915_private *dev_priv) min_gpu_freq = rps->min_freq; max_gpu_freq = rps->max_freq; - if (GT_GEN9_BC(dev_priv) || INTEL_GEN(dev_priv) >= 10) { + if (GT_GEN9_BC(dev_priv) || GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER)) { /* Convert GT frequency to 50 HZ units */ min_gpu_freq /= GEN9_FREQ_SCALER; max_gpu_freq /= GEN9_FREQ_SCALER; @@ -7300,13 +7300,13 @@ static void gen6_update_ring_freq(struct drm_i915_private *dev_priv) const int diff = max_gpu_freq - gpu_freq; unsigned int ia_freq = 0, ring_freq = 0; - if (GT_GEN9_BC(dev_priv) || INTEL_GEN(dev_priv) >= 10) { + if (GT_GEN9_BC(dev_priv) || GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER)) { /* * ring_freq = 2 * GT. ring_freq is in 100MHz units * No floor required for ring frequency on SKL. */ ring_freq = gpu_freq; - } else if (INTEL_GEN(dev_priv) >= 8) { + } else if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER)) { /* max(2 * GT, DDR). 
NB: GT is 50MHz units */ ring_freq = max(min_ring_freq, gpu_freq); } else if (IS_HASWELL(dev_priv)) { @@ -8323,7 +8323,7 @@ void intel_init_gt_powersave(struct drm_i915_private *dev_priv) cherryview_init_gt_powersave(dev_priv); else if (IS_VALLEYVIEW(dev_priv)) valleyview_init_gt_powersave(dev_priv); - else if (INTEL_GEN(dev_priv) >= 6) + else if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) gen6_init_rps_frequencies(dev_priv); /* Derive initial user preferences/limits from the hardware limits */ @@ -8378,7 +8378,7 @@ void intel_cleanup_gt_powersave(struct drm_i915_private *dev_priv) */ void intel_suspend_gt_powersave(struct drm_i915_private *dev_priv) { - if (INTEL_GEN(dev_priv) < 6) + if (GT_GEN_RANGE(dev_priv, 0, 5)) return; /* gen6_rps_idle() will be called later to disable interrupts */ @@ -8390,9 +8390,9 @@ void intel_sanitize_gt_powersave(struct drm_i915_private *dev_priv) dev_priv->gt_pm.rc6.enabled = true; /* force RC6 disabling */ intel_disable_gt_powersave(dev_priv); - if (INTEL_GEN(dev_priv) >= 11) + if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) gen11_reset_rps_interrupts(dev_priv); - else if (INTEL_GEN(dev_priv) >= 6) + else if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) gen6_reset_rps_interrupts(dev_priv); } @@ -8415,13 +8415,13 @@ static void intel_disable_rc6(struct drm_i915_private *dev_priv) if (!dev_priv->gt_pm.rc6.enabled) return; - if (INTEL_GEN(dev_priv) >= 9) + if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) gen9_disable_rc6(dev_priv); else if (IS_CHERRYVIEW(dev_priv)) cherryview_disable_rc6(dev_priv); else if (IS_VALLEYVIEW(dev_priv)) valleyview_disable_rc6(dev_priv); - else if (INTEL_GEN(dev_priv) >= 6) + else if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) gen6_disable_rc6(dev_priv); dev_priv->gt_pm.rc6.enabled = false; @@ -8434,13 +8434,13 @@ static void intel_disable_rps(struct drm_i915_private *dev_priv) if (!dev_priv->gt_pm.rps.enabled) return; - if (INTEL_GEN(dev_priv) >= 9) + if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) gen9_disable_rps(dev_priv); 
 	else if (IS_CHERRYVIEW(dev_priv))
 		cherryview_disable_rps(dev_priv);
 	else if (IS_VALLEYVIEW(dev_priv))
 		valleyview_disable_rps(dev_priv);
-	else if (INTEL_GEN(dev_priv) >= 6)
+	else if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER))
 		gen6_disable_rps(dev_priv);
 	else if (IS_IRONLAKE_M(dev_priv))
 		ironlake_disable_drps(dev_priv);
@@ -8483,11 +8483,11 @@ static void intel_enable_rc6(struct drm_i915_private *dev_priv)
 		cherryview_enable_rc6(dev_priv);
 	else if (IS_VALLEYVIEW(dev_priv))
 		valleyview_enable_rc6(dev_priv);
-	else if (INTEL_GEN(dev_priv) >= 9)
+	else if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
 		gen9_enable_rc6(dev_priv);
 	else if (IS_BROADWELL(dev_priv))
 		gen8_enable_rc6(dev_priv);
-	else if (INTEL_GEN(dev_priv) >= 6)
+	else if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER))
 		gen6_enable_rc6(dev_priv);
 
 	dev_priv->gt_pm.rc6.enabled = true;
@@ -8506,11 +8506,11 @@ static void intel_enable_rps(struct drm_i915_private *dev_priv)
 		cherryview_enable_rps(dev_priv);
 	} else if (IS_VALLEYVIEW(dev_priv)) {
 		valleyview_enable_rps(dev_priv);
-	} else if (INTEL_GEN(dev_priv) >= 9) {
+	} else if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) {
 		gen9_enable_rps(dev_priv);
 	} else if (IS_BROADWELL(dev_priv)) {
 		gen8_enable_rps(dev_priv);
-	} else if (INTEL_GEN(dev_priv) >= 6) {
+	} else if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) {
 		gen6_enable_rps(dev_priv);
 	} else if (IS_IRONLAKE_M(dev_priv)) {
 		ironlake_enable_drps(dev_priv);
@@ -9444,7 +9444,7 @@ void intel_init_pm(struct drm_i915_private *dev_priv)
 		i915_ironlake_get_mem_freq(dev_priv);
 
 	/* For FIFO watermark updates */
-	if (INTEL_GEN(dev_priv) >= 9) {
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) {
 		skl_setup_wm_latency(dev_priv);
 		dev_priv->display.initial_watermarks = skl_initial_wm;
 		dev_priv->display.atomic_update_watermarks = skl_atomic_update_crtc_wm;
@@ -9590,7 +9590,7 @@ int sandybridge_pcode_read(struct drm_i915_private *dev_priv, u32 mbox, u32 *val
 	*val = I915_READ_FW(GEN6_PCODE_DATA);
 	I915_WRITE_FW(GEN6_PCODE_DATA, 0);
 
-	if (INTEL_GEN(dev_priv) > 6)
+	if (GT_GEN_RANGE(dev_priv, 7, GEN_FOREVER))
 		status = gen7_check_mailbox_status(dev_priv);
 	else
 		status = gen6_check_mailbox_status(dev_priv);
@@ -9638,7 +9638,7 @@ int sandybridge_pcode_write_timeout(struct drm_i915_private *dev_priv,
 	I915_WRITE_FW(GEN6_PCODE_DATA, 0);
 
-	if (INTEL_GEN(dev_priv) > 6)
+	if (GT_GEN_RANGE(dev_priv, 7, GEN_FOREVER))
 		status = gen7_check_mailbox_status(dev_priv);
 	else
 		status = gen6_check_mailbox_status(dev_priv);
@@ -9767,7 +9767,7 @@ static int chv_freq_opcode(struct drm_i915_private *dev_priv, int val)
 
 int intel_gpu_freq(struct drm_i915_private *dev_priv, int val)
 {
-	if (INTEL_GEN(dev_priv) >= 9)
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
 		return DIV_ROUND_CLOSEST(val * GT_FREQUENCY_MULTIPLIER,
 					 GEN9_FREQ_SCALER);
 	else if (IS_CHERRYVIEW(dev_priv))
@@ -9780,7 +9780,7 @@ int intel_gpu_freq(struct drm_i915_private *dev_priv, int val)
 
 int intel_freq_opcode(struct drm_i915_private *dev_priv, int val)
 {
-	if (INTEL_GEN(dev_priv) >= 9)
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
 		return DIV_ROUND_CLOSEST(val * GEN9_FREQ_SCALER,
 					 GT_FREQUENCY_MULTIPLIER);
 	else if (IS_CHERRYVIEW(dev_priv))
@@ -9926,7 +9926,7 @@ u32 intel_get_cagf(struct drm_i915_private *dev_priv, u32 rpstat)
 {
 	u32 cagf;
 
-	if (INTEL_GEN(dev_priv) >= 9)
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
 		cagf = (rpstat & GEN9_CAGF_MASK) >> GEN9_CAGF_SHIFT;
 	else if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv))
 		cagf = (rpstat & HSW_CAGF_MASK) >> HSW_CAGF_SHIFT;
diff --git a/drivers/gpu/drm/i915/intel_psr.c b/drivers/gpu/drm/i915/intel_psr.c
index cacd54cc00e6..66e66a6e63f6 100644
--- a/drivers/gpu/drm/i915/intel_psr.c
+++ b/drivers/gpu/drm/i915/intel_psr.c
@@ -91,7 +91,7 @@ void intel_psr_irq_control(struct drm_i915_private *dev_priv, u32 debug)
 	debug_mask = EDP_PSR_POST_EXIT(TRANSCODER_EDP) |
 		     EDP_PSR_PRE_ENTRY(TRANSCODER_EDP);
 
-	if (INTEL_GEN(dev_priv) >= 8) {
+	if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER)) {
 		mask |= EDP_PSR_ERROR(TRANSCODER_A) |
 			EDP_PSR_ERROR(TRANSCODER_B) |
 			EDP_PSR_ERROR(TRANSCODER_C);
@@ -153,7 +153,7 @@ void intel_psr_irq_handler(struct drm_i915_private *dev_priv, u32 psr_iir)
 	enum transcoder cpu_transcoder;
 	ktime_t time_ns = ktime_get();
 
-	if (INTEL_GEN(dev_priv) >= 8)
+	if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER))
 		transcoders |= BIT(TRANSCODER_A) |
 			       BIT(TRANSCODER_B) |
 			       BIT(TRANSCODER_C);
@@ -175,7 +175,7 @@ void intel_psr_irq_handler(struct drm_i915_private *dev_priv, u32 psr_iir)
 			DRM_DEBUG_KMS("[transcoder %s] PSR exit completed\n",
 				      transcoder_name(cpu_transcoder));
 
-			if (INTEL_GEN(dev_priv) >= 9) {
+			if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) {
 				u32 val = I915_READ(PSR_EVENT(cpu_transcoder));
 				bool psr2_enabled = dev_priv->psr.psr2_enabled;
@@ -242,7 +242,7 @@ void intel_psr_init_dpcd(struct intel_dp *intel_dp)
 	WARN_ON(dev_priv->psr.dp);
 	dev_priv->psr.dp = intel_dp;
 
-	if (INTEL_GEN(dev_priv) >= 9 &&
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER) &&
 	    (intel_dp->psr_dpcd[0] == DP_PSR2_WITH_Y_COORD_IS_SUPPORTED)) {
 		bool y_req = intel_dp->psr_dpcd[1] &
 			     DP_PSR2_SU_Y_COORDINATE_REQUIRED;
@@ -350,7 +350,7 @@ static void intel_psr_enable_sink(struct intel_dp *intel_dp)
 	if (dev_priv->psr.link_standby)
 		dpcd_val |= DP_PSR_MAIN_LINK_ACTIVE;
-	if (!dev_priv->psr.psr2_enabled && INTEL_GEN(dev_priv) >= 8)
+	if (!dev_priv->psr.psr2_enabled && GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER))
 		dpcd_val |= DP_PSR_CRC_VERIFICATION;
 	drm_dp_dpcd_writeb(&intel_dp->aux, DP_PSR_EN_CFG, dpcd_val);
@@ -405,7 +405,7 @@ static void hsw_activate_psr1(struct intel_dp *intel_dp)
 	else
 		val |= EDP_PSR_TP1_TP2_SEL;
 
-	if (INTEL_GEN(dev_priv) >= 8)
+	if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER))
 		val |= EDP_PSR_CRC_ENABLE;
 
 	val |= I915_READ(EDP_PSR_CTL) & EDP_PSR_RESTORE_PSR_ACTIVE_CTX_MASK;
@@ -429,7 +429,7 @@ static void hsw_activate_psr2(struct intel_dp *intel_dp)
 	 * mesh at all with our frontbuffer tracking. And the hw alone isn't
 	 * good enough.
 	 */
 	val |= EDP_PSR2_ENABLE | EDP_SU_TRACK_ENABLE;
-	if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv))
+	if (GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER) || IS_GEMINILAKE(dev_priv))
 		val |= EDP_Y_COORDINATE_ENABLE;
 
 	val |= EDP_PSR2_FRAME_BEFORE_SU(dev_priv->psr.sink_sync_latency + 1);
@@ -463,7 +463,7 @@ static bool intel_psr2_config_valid(struct intel_dp *intel_dp,
 	if (!dev_priv->psr.sink_psr2_support)
 		return false;
 
-	if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv)) {
+	if (GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER) || IS_GEMINILAKE(dev_priv)) {
 		psr_max_h = 4096;
 		psr_max_v = 2304;
 	} else if (GT_GEN(dev_priv, 9)) {
@@ -543,7 +543,7 @@ static void intel_psr_activate(struct intel_dp *intel_dp)
 {
 	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
 
-	if (INTEL_GEN(dev_priv) >= 9)
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
 		WARN_ON(I915_READ(EDP_PSR2_CTL) & EDP_PSR2_ENABLE);
 	WARN_ON(I915_READ(EDP_PSR_CTL) & EDP_PSR_ENABLE);
 	WARN_ON(dev_priv->psr.active);
@@ -594,7 +594,7 @@ static void intel_psr_enable_source(struct intel_dp *intel_dp,
 		       EDP_PSR_DEBUG_MASK_LPSP |
 		       EDP_PSR_DEBUG_MASK_MAX_SLEEP;
 
-	if (INTEL_GEN(dev_priv) < 11)
+	if (GT_GEN_RANGE(dev_priv, 0, 10))
 		mask |= EDP_PSR_DEBUG_MASK_DISP_REG_WRITE;
 
 	I915_WRITE(EDP_PSR_DEBUG, mask);
@@ -1063,7 +1063,7 @@ void intel_psr_init(struct drm_i915_private *dev_priv)
 		return;
 
 	if (i915_modparams.enable_psr == -1)
-		if (INTEL_GEN(dev_priv) < 9 || !dev_priv->vbt.psr.enable)
+		if (GT_GEN_RANGE(dev_priv, 0, 8) || !dev_priv->vbt.psr.enable)
 			i915_modparams.enable_psr = 0;
 
 	/* Set link_standby x link_off defaults */
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index e2907ae38b7f..c060ae613088 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -349,7 +349,7 @@ static void ring_setup_phys_status_page(struct intel_engine_cs *engine)
 	u32 addr;
 
 	addr = lower_32_bits(phys);
-	if (INTEL_GEN(dev_priv) >= 4)
+	if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER))
 		addr |= (phys >> 28) & 0xf0;
 
 	I915_WRITE(HWS_PGA, addr);
@@ -390,7 +390,7 @@ static void intel_ring_setup_status_page(struct intel_engine_cs *engine)
 		mmio = RING_HWS_PGA(engine->mmio_base);
 	}
 
-	if (INTEL_GEN(dev_priv) >= 6) {
+	if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) {
 		u32 mask = ~0u;
 
 		/*
@@ -428,7 +428,7 @@ static bool stop_ring(struct intel_engine_cs *engine)
 {
 	struct drm_i915_private *dev_priv = engine->i915;
 
-	if (INTEL_GEN(dev_priv) > 2) {
+	if (GT_GEN_RANGE(dev_priv, 3, GEN_FOREVER)) {
 		I915_WRITE_MODE(engine, _MASKED_BIT_ENABLE(STOP_RING));
 		if (intel_wait_for_register(dev_priv,
 					    RING_MI_MODE(engine->mmio_base),
@@ -537,7 +537,7 @@ static int init_ring_common(struct intel_engine_cs *engine)
 		goto out;
 	}
 
-	if (INTEL_GEN(dev_priv) > 2)
+	if (GT_GEN_RANGE(dev_priv, 3, GEN_FOREVER))
 		I915_WRITE_MODE(engine, _MASKED_BIT_DISABLE(STOP_RING));
 
 	/* Papering over lost _interrupts_ immediately following the restart */
@@ -666,7 +666,7 @@ static int init_render_ring(struct intel_engine_cs *engine)
 	if (GT_GEN_RANGE(dev_priv, 6, 7))
 		I915_WRITE(INSTPM, _MASKED_BIT_ENABLE(INSTPM_FORCE_ORDERING));
 
-	if (INTEL_GEN(dev_priv) >= 6)
+	if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER))
 		I915_WRITE_IMR(engine, ~engine->irq_keep_mask);
 
 	return 0;
@@ -1457,7 +1457,7 @@ void intel_engine_cleanup(struct intel_engine_cs *engine)
 {
 	struct drm_i915_private *dev_priv = engine->i915;
 
-	WARN_ON(INTEL_GEN(dev_priv) > 2 &&
+	WARN_ON(GT_GEN_RANGE(dev_priv, 3, GEN_FOREVER) &&
 		(I915_READ_MODE(engine) & MODE_IDLE) == 0);
 
 	intel_ring_unpin(engine->buffer);
@@ -2086,7 +2086,7 @@ static void intel_ring_init_semaphores(struct drm_i915_private *dev_priv,
 	if (!HAS_LEGACY_SEMAPHORES(dev_priv))
 		return;
 
-	GEM_BUG_ON(INTEL_GEN(dev_priv) < 6);
+	GEM_BUG_ON(GT_GEN_RANGE(dev_priv, 0, 5));
 	engine->semaphore.sync_to = gen6_ring_sync_to;
 	engine->semaphore.signal = gen6_signal;
@@ -2141,15 +2141,15 @@ static void intel_ring_init_semaphores(struct drm_i915_private *dev_priv,
 static void intel_ring_init_irq(struct drm_i915_private *dev_priv,
 				struct intel_engine_cs *engine)
 {
-	if (INTEL_GEN(dev_priv) >= 6) {
+	if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) {
 		engine->irq_enable = gen6_irq_enable;
 		engine->irq_disable = gen6_irq_disable;
 		engine->irq_seqno_barrier = gen6_seqno_barrier;
-	} else if (INTEL_GEN(dev_priv) >= 5) {
+	} else if (GT_GEN_RANGE(dev_priv, 5, GEN_FOREVER)) {
 		engine->irq_enable = gen5_irq_enable;
 		engine->irq_disable = gen5_irq_disable;
 		engine->irq_seqno_barrier = gen5_seqno_barrier;
-	} else if (INTEL_GEN(dev_priv) >= 3) {
+	} else if (GT_GEN_RANGE(dev_priv, 3, GEN_FOREVER)) {
 		engine->irq_enable = i9xx_irq_enable;
 		engine->irq_disable = i9xx_irq_disable;
 	} else {
@@ -2177,7 +2177,7 @@ static void intel_ring_default_vfuncs(struct drm_i915_private *dev_priv,
 			       struct intel_engine_cs *engine)
 {
 	/* gen8+ are only supported with execlists */
-	GEM_BUG_ON(INTEL_GEN(dev_priv) >= 8);
+	GEM_BUG_ON(GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER));
 
 	intel_ring_init_irq(dev_priv, engine);
 	intel_ring_init_semaphores(dev_priv, engine);
@@ -2205,9 +2205,9 @@ static void intel_ring_default_vfuncs(struct drm_i915_private *dev_priv,
 
 	engine->set_default_submission = i9xx_set_default_submission;
 
-	if (INTEL_GEN(dev_priv) >= 6)
+	if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER))
 		engine->emit_bb_start = gen6_emit_bb_start;
-	else if (INTEL_GEN(dev_priv) >= 4)
+	else if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER))
 		engine->emit_bb_start = i965_emit_bb_start;
 	else if (IS_I830(dev_priv) || IS_I845G(dev_priv))
 		engine->emit_bb_start = i830_emit_bb_start;
@@ -2227,7 +2227,7 @@ int intel_init_render_ring_buffer(struct intel_engine_cs *engine)
 
 	engine->irq_enable_mask = GT_RENDER_USER_INTERRUPT;
 
-	if (INTEL_GEN(dev_priv) >= 6) {
+	if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) {
 		engine->init_context = intel_rcs_ctx_init;
 		engine->emit_flush = gen7_render_ring_flush;
 		if (GT_GEN(dev_priv, 6))
@@ -2235,7 +2235,7 @@ int intel_init_render_ring_buffer(struct intel_engine_cs *engine)
 	} else if (GT_GEN(dev_priv, 5)) {
 		engine->emit_flush = gen4_render_ring_flush;
 	} else {
-		if (INTEL_GEN(dev_priv) < 4)
+		if (GT_GEN_RANGE(dev_priv, 0, 3))
 			engine->emit_flush = gen2_render_ring_flush;
 		else
 			engine->emit_flush = gen4_render_ring_flush;
@@ -2260,7 +2260,7 @@ int intel_init_bsd_ring_buffer(struct intel_engine_cs *engine)
 
 	intel_ring_default_vfuncs(dev_priv, engine);
 
-	if (INTEL_GEN(dev_priv) >= 6) {
+	if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) {
 		/* gen6 bsd needs a special wa for tail updates */
 		if (GT_GEN(dev_priv, 6))
 			engine->set_default_submission = gen6_bsd_set_default_submission;
diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c
index 3b78afe0a790..f295d0df4b0d 100644
--- a/drivers/gpu/drm/i915/intel_runtime_pm.c
+++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
@@ -369,7 +369,7 @@ static void hsw_power_well_enable(struct drm_i915_private *dev_priv,
 	u32 val;
 
 	if (wait_fuses) {
-		pg = INTEL_GEN(dev_priv) >= 11 ? ICL_PW_CTL_IDX_TO_PG(pw_idx) :
+		pg = GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER) ? ICL_PW_CTL_IDX_TO_PG(pw_idx) :
 						 SKL_PW_CTL_IDX_TO_PG(pw_idx);
 		/*
 		 * For PW1 we have to wait both for the PW0/PG0 fuse state
@@ -579,7 +579,7 @@ static u32 gen9_dc_mask(struct drm_i915_private *dev_priv)
 	u32 mask;
 
 	mask = DC_STATE_EN_UPTO_DC5;
-	if (INTEL_GEN(dev_priv) >= 11)
+	if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER))
 		mask |= DC_STATE_EN_UPTO_DC6 | DC_STATE_EN_DC9;
 	else if (GT_GEN9_LP(dev_priv))
 		mask |= DC_STATE_EN_DC9;
@@ -3019,7 +3019,7 @@ static uint32_t get_allowed_dc_mask(const struct drm_i915_private *dev_priv,
 	int requested_dc;
 	int max_dc;
 
-	if (INTEL_GEN(dev_priv) >= 11) {
+	if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) {
 		max_dc = 2;
 		/*
 		 * DC9 has a separate HW flow from the rest of the DC states,
@@ -3220,7 +3220,7 @@ static void gen9_dbuf_disable(struct drm_i915_private *dev_priv)
 
 static u8 intel_dbuf_max_slices(struct drm_i915_private *dev_priv)
 {
-	if (INTEL_GEN(dev_priv) < 11)
+	if (GT_GEN_RANGE(dev_priv, 0, 10))
 		return 1;
 	return 2;
 }
@@ -3826,7 +3826,7 @@ void intel_power_domains_init_hw(struct drm_i915_private *dev_priv, bool resume)
 		mutex_lock(&power_domains->lock);
 		vlv_cmnlane_wa(dev_priv);
 		mutex_unlock(&power_domains->lock);
-	} else if (IS_IVYBRIDGE(dev_priv) || INTEL_GEN(dev_priv) >= 7)
+	} else if (IS_IVYBRIDGE(dev_priv) || GT_GEN_RANGE(dev_priv, 7, GEN_FOREVER))
 		intel_pch_reset_handshake(dev_priv, !HAS_PCH_NOP(dev_priv));
 
 	/*
diff --git a/drivers/gpu/drm/i915/intel_sdvo.c b/drivers/gpu/drm/i915/intel_sdvo.c
index 5805ec1aba12..72212b8a4c06 100644
--- a/drivers/gpu/drm/i915/intel_sdvo.c
+++ b/drivers/gpu/drm/i915/intel_sdvo.c
@@ -1344,13 +1344,13 @@ static void intel_sdvo_pre_enable(struct intel_encoder *intel_encoder,
 		return;
 
 	/* Set the SDVO control regs. */
-	if (INTEL_GEN(dev_priv) >= 4) {
+	if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER)) {
 		/* The real mode polarity is set by the SDVO commands, using
 		 * struct intel_sdvo_dtd. */
 		sdvox = SDVO_VSYNC_ACTIVE_HIGH | SDVO_HSYNC_ACTIVE_HIGH;
 		if (!HAS_PCH_SPLIT(dev_priv) && crtc_state->limited_color_range)
 			sdvox |= HDMI_COLOR_RANGE_16_235;
-		if (INTEL_GEN(dev_priv) < 5)
+		if (GT_GEN_RANGE(dev_priv, 0, 4))
 			sdvox |= SDVO_BORDER_ENABLE;
 	} else {
 		sdvox = I915_READ(intel_sdvo->sdvo_reg);
@@ -1367,11 +1367,11 @@ static void intel_sdvo_pre_enable(struct intel_encoder *intel_encoder,
 		sdvox |= SDVO_PIPE_SEL(crtc->pipe);
 
 	if (crtc_state->has_audio) {
-		WARN_ON_ONCE(INTEL_GEN(dev_priv) < 4);
+		WARN_ON_ONCE(GT_GEN_RANGE(dev_priv, 0, 3));
 		sdvox |= SDVO_AUDIO_ENABLE;
 	}
 
-	if (INTEL_GEN(dev_priv) >= 4) {
+	if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER)) {
 		/* done in crtc_mode_set as the dpll_md reg must be written early */
 	} else if (IS_I945G(dev_priv) || IS_I945GM(dev_priv) ||
 		   IS_G33(dev_priv) || IS_PINEVIEW(dev_priv)) {
@@ -1382,7 +1382,7 @@ static void intel_sdvo_pre_enable(struct intel_encoder *intel_encoder,
 	}
 
 	if (input_dtd.part2.sdvo_flags & SDVO_NEED_TO_STALL &&
-	    INTEL_GEN(dev_priv) < 5)
+	    GT_GEN_RANGE(dev_priv, 0, 4))
 		sdvox |= SDVO_STALL_SELECT;
 	intel_sdvo_write_sdvox(intel_sdvo, sdvox);
 }
@@ -2451,7 +2451,7 @@ intel_sdvo_add_hdmi_properties(struct intel_sdvo *intel_sdvo,
 		to_i915(connector->base.base.dev);
 
 	intel_attach_force_audio_property(&connector->base.base);
-	if (INTEL_GEN(dev_priv) >= 4 && IS_MOBILE(dev_priv)) {
+	if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER) && IS_MOBILE(dev_priv)) {
 		intel_attach_broadcast_rgb_property(&connector->base.base);
 	}
 	intel_attach_aspect_ratio_property(&connector->base.base);
@@ -2521,7 +2521,7 @@ intel_sdvo_dvi_init(struct intel_sdvo *intel_sdvo, int device)
 	connector->connector_type = DRM_MODE_CONNECTOR_DVID;
 
 	/* gen3 doesn't do the hdmi bits in the SDVO register */
-	if (INTEL_GEN(dev_priv) >= 4 &&
+	if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER) &&
 	    intel_sdvo_is_hdmi_connector(intel_sdvo, device)) {
 		connector->connector_type = DRM_MODE_CONNECTOR_HDMIA;
 		intel_sdvo_connector->is_hdmi = true;
diff --git a/drivers/gpu/drm/i915/intel_sprite.c b/drivers/gpu/drm/i915/intel_sprite.c
index 049e679e4145..4c0925615ecd 100644
--- a/drivers/gpu/drm/i915/intel_sprite.c
+++ b/drivers/gpu/drm/i915/intel_sprite.c
@@ -494,7 +494,7 @@ skl_program_plane(struct intel_plane *plane,
 
 	spin_lock_irqsave(&dev_priv->uncore.lock, irqflags);
 
-	if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv))
+	if (GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER) || IS_GEMINILAKE(dev_priv))
 		I915_WRITE_FW(PLANE_COLOR_CTL(pipe, plane_id),
 			      plane_state->color_ctl);
@@ -522,7 +522,7 @@ skl_program_plane(struct intel_plane *plane,
 	I915_WRITE_FW(PLANE_AUX_DIST(pipe, plane_id),
 		      (plane_state->color_plane[1].offset - surf_addr) | aux_stride);
-	if (INTEL_GEN(dev_priv) < 11)
+	if (GT_GEN_RANGE(dev_priv, 0, 10))
 		I915_WRITE_FW(PLANE_AUX_OFFSET(pipe, plane_id),
 			      (plane_state->color_plane[1].y << 16) |
 			      plane_state->color_plane[1].x);
@@ -1314,7 +1314,7 @@ g4x_sprite_check(struct intel_crtc_state *crtc_state,
 	int ret;
 
 	if (intel_fb_scalable(plane_state->base.fb)) {
-		if (INTEL_GEN(dev_priv) < 7) {
+		if (GT_GEN_RANGE(dev_priv, 0, 6)) {
 			min_scale = 1;
 			max_scale = 16 << 16;
 		} else if (IS_IVYBRIDGE(dev_priv)) {
@@ -1345,7 +1345,7 @@ g4x_sprite_check(struct intel_crtc_state *crtc_state,
 	if (ret)
 		return ret;
 
-	if (INTEL_GEN(dev_priv) >= 7)
+	if (GT_GEN_RANGE(dev_priv, 7, GEN_FOREVER))
 		plane_state->ctl = ivb_sprite_ctl(crtc_state, plane_state);
 	else
 		plane_state->ctl = g4x_sprite_ctl(crtc_state, plane_state);
@@ -1444,7 +1444,7 @@ static int skl_plane_check_fb(const struct intel_crtc_state *crtc_state,
 	 */
 	switch (fb->format->format) {
 	case DRM_FORMAT_RGB565:
-		if (INTEL_GEN(dev_priv) >= 11)
+		if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER))
 			break;
 		/* fall through */
 	case DRM_FORMAT_C8:
@@ -1570,7 +1570,7 @@ static int skl_plane_check(struct intel_crtc_state *crtc_state,
 
 	plane_state->ctl = skl_plane_ctl(crtc_state, plane_state);
 
-	if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv))
+	if (GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER) || IS_GEMINILAKE(dev_priv))
 		plane_state->color_ctl = glk_plane_color_ctl(crtc_state,
 							     plane_state);
@@ -1579,7 +1579,7 @@ static int skl_plane_check(struct intel_crtc_state *crtc_state,
 
 static bool has_dst_key_in_primary_plane(struct drm_i915_private *dev_priv)
 {
-	return INTEL_GEN(dev_priv) >= 9;
+	return GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER);
 }
 
 static void intel_plane_set_ckey(struct intel_plane_state *plane_state,
@@ -1603,7 +1603,7 @@ static void intel_plane_set_ckey(struct intel_plane_state *plane_state,
 	 * On SKL+ we want dst key enabled on
 	 * the primary and not on the sprite.
 	 */
-	if (INTEL_GEN(dev_priv) >= 9 && plane->id != PLANE_PRIMARY &&
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER) && plane->id != PLANE_PRIMARY &&
 	    set->flags & I915_SET_COLORKEY_DESTINATION)
 		key->flags = 0;
 }
@@ -1642,7 +1642,7 @@ int intel_sprite_set_colorkey_ioctl(struct drm_device *dev, void *data,
 	 * Also multiple planes can't do destination keying on the same
 	 * pipe simultaneously.
 	 */
-	if (INTEL_GEN(dev_priv) >= 9 &&
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER) &&
 	    to_intel_plane(plane)->id >= PLANE_SPRITE1 &&
 	    set->flags & I915_SET_COLORKEY_DESTINATION)
 		return -EINVAL;
@@ -1972,7 +1972,7 @@ static bool skl_plane_has_fbc(struct drm_i915_private *dev_priv,
 static bool skl_plane_has_planar(struct drm_i915_private *dev_priv,
 				 enum pipe pipe, enum plane_id plane_id)
 {
-	if (INTEL_GEN(dev_priv) >= 11)
+	if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER))
 		return plane_id <= PLANE_SPRITE3;
 
 	/* Display WA #0870: skl, bxt */
@@ -1994,7 +1994,7 @@ static bool skl_plane_has_ccs(struct drm_i915_private *dev_priv,
 	if (plane_id == PLANE_CURSOR)
 		return false;
 
-	if (INTEL_GEN(dev_priv) >= 10)
+	if (GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER))
 		return true;
 
 	if (IS_GEMINILAKE(dev_priv))
@@ -2104,7 +2104,7 @@ skl_universal_plane_create(struct drm_i915_private *dev_priv,
 		DRM_MODE_ROTATE_0 | DRM_MODE_ROTATE_90 |
 		DRM_MODE_ROTATE_180 | DRM_MODE_ROTATE_270;
 
-	if (INTEL_GEN(dev_priv) >= 10)
+	if (GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER))
 		supported_rotations |= DRM_MODE_REFLECT_X;
 
 	drm_plane_create_rotation_property(&plane->base,
@@ -2148,7 +2148,7 @@ intel_sprite_plane_create(struct drm_i915_private *dev_priv,
 	int num_formats;
 	int ret;
 
-	if (INTEL_GEN(dev_priv) >= 9)
+	if (GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER))
 		return skl_universal_plane_create(dev_priv, pipe,
 						  PLANE_SPRITE0 + sprite);
@@ -2168,7 +2168,7 @@ intel_sprite_plane_create(struct drm_i915_private *dev_priv,
 		modifiers = i9xx_plane_format_modifiers;
 		plane_funcs = &vlv_sprite_funcs;
-	} else if (INTEL_GEN(dev_priv) >= 7) {
+	} else if (GT_GEN_RANGE(dev_priv, 7, GEN_FOREVER)) {
 		plane->max_stride = g4x_sprite_max_stride;
 		plane->update_plane = ivb_update_plane;
 		plane->disable_plane = ivb_disable_plane;
diff --git a/drivers/gpu/drm/i915/intel_tv.c b/drivers/gpu/drm/i915/intel_tv.c
index 860f306a23ba..c4332fed6182 100644
--- a/drivers/gpu/drm/i915/intel_tv.c
+++ b/drivers/gpu/drm/i915/intel_tv.c
@@ -1069,7 +1069,7 @@ static void intel_tv_pre_enable(struct intel_encoder *encoder,
 
 	set_color_conversion(dev_priv, color_conversion);
 
-	if (INTEL_GEN(dev_priv) >= 4)
+	if (GT_GEN_RANGE(dev_priv, 4, GEN_FOREVER))
 		I915_WRITE(TV_CLR_KNOBS, 0x00404000);
 	else
 		I915_WRITE(TV_CLR_KNOBS, 0x00606000);
diff --git a/drivers/gpu/drm/i915/intel_uc.c b/drivers/gpu/drm/i915/intel_uc.c
index 9eca84e7baa5..32ceb721b564 100644
--- a/drivers/gpu/drm/i915/intel_uc.c
+++ b/drivers/gpu/drm/i915/intel_uc.c
@@ -401,7 +401,7 @@ int intel_uc_init_hw(struct drm_i915_private *i915)
 		ret = intel_guc_submission_enable(guc);
 		if (ret)
 			goto err_communication;
-	} else if (INTEL_GEN(i915) < 11) {
+	} else if (GT_GEN_RANGE(i915, 0, 10)) {
 		ret = intel_guc_sample_forcewake(guc);
 		if (ret)
 			goto err_communication;
diff --git a/drivers/gpu/drm/i915/intel_uncore.c b/drivers/gpu/drm/i915/intel_uncore.c
index 2e98416467a0..f488cf5d1f43 100644
--- a/drivers/gpu/drm/i915/intel_uncore.c
+++ b/drivers/gpu/drm/i915/intel_uncore.c
@@ -449,7 +449,7 @@ u64 intel_uncore_edram_size(struct drm_i915_private *dev_priv)
 	/* The needed capability bits for size calculation
 	 * are not there with pre gen9 so return 128MB always.
 	 */
-	if (INTEL_GEN(dev_priv) < 9)
+	if (GT_GEN_RANGE(dev_priv, 0, 8))
 		return 128 * 1024 * 1024;
 
 	return gen9_edram_size(dev_priv);
@@ -459,7 +459,7 @@ static void intel_uncore_edram_detect(struct drm_i915_private *dev_priv)
 {
 	if (IS_HASWELL(dev_priv) ||
 	    IS_BROADWELL(dev_priv) ||
-	    INTEL_GEN(dev_priv) >= 9) {
+	    GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER)) {
 		dev_priv->edram_cap = __raw_i915_read32(dev_priv,
 							HSW_EDRAM_CAP);
@@ -877,7 +877,7 @@ find_fw_domain(struct drm_i915_private *dev_priv, u32 offset)
 	{ .start = (s), .end = (e), .domains = (d) }
 
 #define HAS_FWTABLE(dev_priv) \
-	(INTEL_GEN(dev_priv) >= 9 || \
+	(GT_GEN_RANGE(dev_priv, 9, GEN_FOREVER) || \
 	 IS_CHERRYVIEW(dev_priv) || \
 	 IS_VALLEYVIEW(dev_priv))
@@ -1395,7 +1395,7 @@ static void fw_domain_fini(struct drm_i915_private *dev_priv,
 
 static void intel_uncore_fw_domains_init(struct drm_i915_private *dev_priv)
 {
-	if (INTEL_GEN(dev_priv) <= 5 || intel_vgpu_active(dev_priv))
+	if (GT_GEN_RANGE(dev_priv, 0, 5) || intel_vgpu_active(dev_priv))
 		return;
 
 	if (GT_GEN(dev_priv, 6)) {
@@ -1409,7 +1409,7 @@ static void intel_uncore_fw_domains_init(struct drm_i915_private *dev_priv)
 		dev_priv->uncore.fw_clear = _MASKED_BIT_DISABLE(FORCEWAKE_KERNEL);
 	}
 
-	if (INTEL_GEN(dev_priv) >= 11) {
+	if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) {
 		int i;
 
 		dev_priv->uncore.funcs.force_wake_get =
@@ -1613,7 +1613,7 @@ void intel_uncore_init(struct drm_i915_private *dev_priv)
  */
 void intel_uncore_prune(struct drm_i915_private *dev_priv)
 {
-	if (INTEL_GEN(dev_priv) >= 11) {
+	if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) {
 		enum forcewake_domains fw_domains = dev_priv->uncore.fw_domains;
 		enum forcewake_domain_id domain_id;
 		int i;
@@ -1744,7 +1744,7 @@ static void i915_stop_engines(struct drm_i915_private *dev_priv,
 	struct intel_engine_cs *engine;
 	enum intel_engine_id id;
 
-	if (INTEL_GEN(dev_priv) < 3)
+	if (GT_GEN_RANGE(dev_priv, 0, 2))
 		return;
 
 	for_each_engine_masked(engine, dev_priv, engine_mask, id)
@@ -2117,7 +2117,7 @@ static int reset_engines(struct drm_i915_private *i915,
 			 unsigned int engine_mask,
 			 unsigned int retry)
 {
-	if (INTEL_GEN(i915) >= 11)
+	if (GT_GEN_RANGE(i915, 11, GEN_FOREVER))
 		return gen11_reset_engines(i915, engine_mask);
 	else
 		return gen6_reset_engines(i915, engine_mask, retry);
@@ -2169,9 +2169,9 @@ static reset_func intel_get_gpu_reset(struct drm_i915_private *dev_priv)
 	if (!i915_modparams.reset)
 		return NULL;
 
-	if (INTEL_GEN(dev_priv) >= 8)
+	if (GT_GEN_RANGE(dev_priv, 8, GEN_FOREVER))
 		return gen8_reset_engines;
-	else if (INTEL_GEN(dev_priv) >= 6)
+	else if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER))
 		return gen6_reset_engines;
 	else if (GT_GEN(dev_priv, 5))
 		return ironlake_do_reset;
@@ -2179,7 +2179,7 @@ static reset_func intel_get_gpu_reset(struct drm_i915_private *dev_priv)
 		return g4x_do_reset;
 	else if (IS_G33(dev_priv) || IS_PINEVIEW(dev_priv))
 		return g33_do_reset;
-	else if (INTEL_GEN(dev_priv) >= 3)
+	else if (GT_GEN_RANGE(dev_priv, 3, GEN_FOREVER))
 		return i915_do_reset;
 	else
 		return NULL;
@@ -2262,7 +2262,7 @@ bool intel_has_reset_engine(struct drm_i915_private *dev_priv)
 
 int intel_reset_guc(struct drm_i915_private *dev_priv)
 {
-	u32 guc_domain = INTEL_GEN(dev_priv) >= 11 ? GEN11_GRDOM_GUC :
+	u32 guc_domain = GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER) ? GEN11_GRDOM_GUC :
 						     GEN9_GRDOM_GUC;
 	int ret;
@@ -2314,11 +2314,11 @@ intel_uncore_forcewake_for_read(struct drm_i915_private *dev_priv,
 	u32 offset = i915_mmio_reg_offset(reg);
 	enum forcewake_domains fw_domains;
 
-	if (INTEL_GEN(dev_priv) >= 11) {
+	if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) {
 		fw_domains = __gen11_fwtable_reg_read_fw_domains(offset);
 	} else if (HAS_FWTABLE(dev_priv)) {
 		fw_domains = __fwtable_reg_read_fw_domains(offset);
-	} else if (INTEL_GEN(dev_priv) >= 6) {
+	} else if (GT_GEN_RANGE(dev_priv, 6, GEN_FOREVER)) {
 		fw_domains = __gen6_reg_read_fw_domains(offset);
 	} else {
 		WARN_ON(!GT_GEN_RANGE(dev_priv, 2, 5));
@@ -2337,7 +2337,7 @@ intel_uncore_forcewake_for_write(struct drm_i915_private *dev_priv,
 	u32 offset = i915_mmio_reg_offset(reg);
 	enum forcewake_domains fw_domains;
 
-	if (INTEL_GEN(dev_priv) >= 11) {
+	if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER)) {
 		fw_domains = __gen11_fwtable_reg_write_fw_domains(offset);
 	} else if (HAS_FWTABLE(dev_priv) && !IS_VALLEYVIEW(dev_priv)) {
 		fw_domains = __fwtable_reg_write_fw_domains(offset);
diff --git a/drivers/gpu/drm/i915/intel_wopcm.c b/drivers/gpu/drm/i915/intel_wopcm.c
index 0a5c68acf3dd..0811fe3885e2 100644
--- a/drivers/gpu/drm/i915/intel_wopcm.c
+++ b/drivers/gpu/drm/i915/intel_wopcm.c
@@ -80,7 +80,7 @@ static inline u32 context_reserved_size(struct drm_i915_private *i915)
 {
 	if (GT_GEN9_LP(i915))
 		return BXT_WOPCM_RC6_CTX_RESERVED;
-	else if (INTEL_GEN(i915) >= 10)
+	else if (GT_GEN_RANGE(i915, 10, GEN_FOREVER))
 		return CNL_WOPCM_HW_CTX_RESERVED;
 	else
 		return 0;
diff --git a/drivers/gpu/drm/i915/intel_workarounds.c b/drivers/gpu/drm/i915/intel_workarounds.c
index db899cc5c981..1efaa5b5dbcc 100644
--- a/drivers/gpu/drm/i915/intel_workarounds.c
+++ b/drivers/gpu/drm/i915/intel_workarounds.c
@@ -517,7 +517,7 @@ int intel_ctx_workarounds_init(struct drm_i915_private *dev_priv)
 
 	dev_priv->workarounds.count = 0;
 
-	if (INTEL_GEN(dev_priv) < 8)
+	if (GT_GEN_RANGE(dev_priv, 0, 7))
 		err = 0;
 	else if (IS_BROADWELL(dev_priv))
 		err = bdw_ctx_workarounds_init(dev_priv);
@@ -749,7 +749,7 @@ static void wa_init_mcr(struct drm_i915_private *dev_priv)
 	 * something more complex that requires checking the range of every
	 * MMIO read).
 	 */
-	if (INTEL_GEN(dev_priv) >= 10 &&
+	if (GT_GEN_RANGE(dev_priv, 10, GEN_FOREVER) &&
 	    is_power_of_2(sseu->slice_mask)) {
 		/*
 		 * read FUSE3 for enabled L3 Bank IDs, if L3 Bank matches
@@ -772,7 +772,7 @@ static void wa_init_mcr(struct drm_i915_private *dev_priv)
 
 	mcr = I915_READ(GEN8_MCR_SELECTOR);
 
-	if (INTEL_GEN(dev_priv) >= 11)
+	if (GT_GEN_RANGE(dev_priv, 11, GEN_FOREVER))
 		mcr_slice_subslice_mask = GEN11_MCR_SLICE_MASK |
 					  GEN11_MCR_SUBSLICE_MASK;
 	else
@@ -916,7 +916,7 @@ static void icl_gt_workarounds_apply(struct drm_i915_private *dev_priv)
 
 void intel_gt_workarounds_apply(struct drm_i915_private *dev_priv)
 {
-	if (INTEL_GEN(dev_priv) < 8)
+	if (GT_GEN_RANGE(dev_priv, 0, 7))
 		return;
 	else if (IS_BROADWELL(dev_priv))
 		bdw_gt_workarounds_apply(dev_priv);
@@ -1033,7 +1033,7 @@ static struct whitelist *whitelist_build(struct intel_engine_cs *engine,
 	w->count = 0;
 	w->nopid = i915_mmio_reg_offset(RING_NOPID(engine->mmio_base));
 
-	if (INTEL_GEN(i915) < 8)
+	if (GT_GEN_RANGE(i915, 0, 7))
 		return NULL;
 	else if (IS_BROADWELL(i915))
 		bdw_whitelist_build(w);
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_coherency.c b/drivers/gpu/drm/i915/selftests/i915_gem_coherency.c
index f7392c1ffe75..e4cfdaa5551b 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_coherency.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_coherency.c
@@ -215,12 +215,12 @@ static int gpu_set(struct drm_i915_gem_object *obj,
 		return PTR_ERR(cs);
 	}
 
-	if (INTEL_GEN(i915) >= 8) {
+	if (GT_GEN_RANGE(i915, 8, GEN_FOREVER)) {
 		*cs++ = MI_STORE_DWORD_IMM_GEN4 | 1 << 22;
 		*cs++ = lower_32_bits(i915_ggtt_offset(vma) + offset);
 		*cs++ = upper_32_bits(i915_ggtt_offset(vma) + offset);
 		*cs++ = v;
-	} else if (INTEL_GEN(i915) >= 4) {
+	} else if (GT_GEN_RANGE(i915, 4, GEN_FOREVER)) {
 		*cs++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT;
 		*cs++ = 0;
 		*cs++ = i915_ggtt_offset(vma) + offset;
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/selftests/i915_gem_context.c
index 7d82043aff10..dd5424f8faa9 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_context.c
@@ -379,7 +379,7 @@ static int gpu_fill(struct drm_i915_gem_object *obj,
 	}
 
 	flags = 0;
-	if (INTEL_GEN(vm->i915) <= 5)
+	if (GT_GEN_RANGE(vm->i915, 0, 5))
 		flags |= I915_DISPATCH_SECURE;
 
 	err = engine->emit_bb_start(rq,
@@ -799,7 +799,7 @@ static int write_to_scratch(struct i915_gem_context *ctx,
 	}
 
 	*cmd++ = MI_STORE_DWORD_IMM_GEN4;
-	if (INTEL_GEN(i915) >= 8) {
+	if (GT_GEN_RANGE(i915, 8, GEN_FOREVER)) {
 		*cmd++ = lower_32_bits(offset);
 		*cmd++ = upper_32_bits(offset);
 	} else {
@@ -887,7 +887,7 @@ static int read_from_scratch(struct i915_gem_context *ctx,
 	}
 
 	memset(cmd, POISON_INUSE, PAGE_SIZE);
-	if (INTEL_GEN(i915) >= 8) {
+	if (GT_GEN_RANGE(i915, 8, GEN_FOREVER)) {
 		*cmd++ = MI_LOAD_REGISTER_MEM_GEN8;
 		*cmd++ = RCS_GPR0;
 		*cmd++ = lower_32_bits(offset);
@@ -984,7 +984,7 @@ static int igt_vm_isolation(void *arg)
 	u64 vm_total;
 	int err;
 
-	if (INTEL_GEN(i915) < 7)
+	if (GT_GEN_RANGE(i915, 0, 6))
 		return 0;
 
 	/*
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_object.c b/drivers/gpu/drm/i915/selftests/i915_gem_object.c
index c3999dd2021e..c35c7b59e7d2 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_object.c
@@ -379,7 +379,7 @@ static int igt_partial_tiling(void *arg)
 		    tile.swizzle == I915_BIT_6_SWIZZLE_9_10_17)
 			continue;
 
-		if (INTEL_GEN(i915) <= 2) {
+		if (GT_GEN_RANGE(i915, 0, 2)) {
 			tile.height = 16;
 			tile.width = 128;
 			tile.size = 11;
@@ -394,9 +394,9 @@ static int igt_partial_tiling(void *arg)
 			tile.size = 12;
 		}
 
-		if (INTEL_GEN(i915) < 4)
+		if (GT_GEN_RANGE(i915, 0, 3))
			max_pitch = 8192 / tile.width;
-		else if (INTEL_GEN(i915) < 7)
+		else if (GT_GEN_RANGE(i915, 0, 6))
 			max_pitch = 128 * I965_FENCE_MAX_PITCH_VAL / tile.width;
 		else
 			max_pitch = 128 * GEN7_FENCE_MAX_PITCH_VAL / tile.width;
@@ -409,7 +409,7 @@ static int igt_partial_tiling(void *arg)
 			if (err)
 				goto out_unlock;
 
-			if (pitch > 2 && INTEL_GEN(i915) >= 4) {
+			if (pitch > 2 && GT_GEN_RANGE(i915, 4, GEN_FOREVER)) {
 				tile.stride = tile.width * (pitch - 1);
 				err = check_partial_mapping(obj, &tile, end);
 				if (err == -EINTR)
@@ -418,7 +418,7 @@ static int igt_partial_tiling(void *arg)
 				goto out_unlock;
 			}
 
-			if (pitch < max_pitch && INTEL_GEN(i915) >= 4) {
+			if (pitch < max_pitch && GT_GEN_RANGE(i915, 4, GEN_FOREVER)) {
 				tile.stride = tile.width * (pitch + 1);
 				err = check_partial_mapping(obj, &tile, end);
 				if (err == -EINTR)
@@ -428,7 +428,7 @@ static int igt_partial_tiling(void *arg)
 			}
 		}
 
-		if (INTEL_GEN(i915) >= 4) {
+		if (GT_GEN_RANGE(i915, 4, GEN_FOREVER)) {
 			for_each_prime_number(pitch, max_pitch) {
 				tile.stride = tile.width * pitch;
 				err = check_partial_mapping(obj, &tile, end);
diff --git a/drivers/gpu/drm/i915/selftests/intel_hangcheck.c b/drivers/gpu/drm/i915/selftests/intel_hangcheck.c
index 51d0e2bed9e1..3c65059b60db 100644
--- a/drivers/gpu/drm/i915/selftests/intel_hangcheck.c
+++ b/drivers/gpu/drm/i915/selftests/intel_hangcheck.c
@@ -150,7 +150,7 @@ static int emit_recurse_batch(struct hang *h,
 	}
 
 	batch = h->batch;
-	if (INTEL_GEN(i915) >= 8) {
+	if (GT_GEN_RANGE(i915, 8, GEN_FOREVER)) {
 		*batch++ = MI_STORE_DWORD_IMM_GEN4;
 		*batch++ = lower_32_bits(hws_address(hws, rq));
 		*batch++ = upper_32_bits(hws_address(hws, rq));
@@ -164,7 +164,7 @@ static int emit_recurse_batch(struct hang *h,
 		*batch++ = MI_BATCH_BUFFER_START | 1 << 8 | 1;
 		*batch++ = lower_32_bits(vma->node.start);
 		*batch++ = upper_32_bits(vma->node.start);
-	} else if (INTEL_GEN(i915) >= 6) {
+	} else if (GT_GEN_RANGE(i915, 6, GEN_FOREVER)) {
 		*batch++ = MI_STORE_DWORD_IMM_GEN4;
 		*batch++ = 0;
 		*batch++ = lower_32_bits(hws_address(hws, rq));
@@ -177,7 +177,7 @@ static int emit_recurse_batch(struct hang *h,
 		*batch++ = MI_ARB_CHECK;
 		*batch++ = MI_BATCH_BUFFER_START | 1 << 8;
 		*batch++ = lower_32_bits(vma->node.start);
-	} else if (INTEL_GEN(i915) >= 4) {
+	} else if (GT_GEN_RANGE(i915, 4, GEN_FOREVER)) {
 		*batch++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT;
 		*batch++ = 0;
 		*batch++ = lower_32_bits(hws_address(hws, rq));
@@ -207,7 +207,7 @@ static int emit_recurse_batch(struct hang *h,
 	i915_gem_chipset_flush(h->i915);
 
 	flags = 0;
-	if (INTEL_GEN(vm->i915) <= 5)
+	if (GT_GEN_RANGE(vm->i915, 0, 5))
 		flags |= I915_DISPATCH_SECURE;
 
 	err = rq->engine->emit_bb_start(rq, vma->node.start, PAGE_SIZE, flags);
diff --git a/drivers/gpu/drm/i915/selftests/intel_lrc.c b/drivers/gpu/drm/i915/selftests/intel_lrc.c
index 94fc0e5c8766..a00a6931c50a 100644
--- a/drivers/gpu/drm/i915/selftests/intel_lrc.c
+++ b/drivers/gpu/drm/i915/selftests/intel_lrc.c
@@ -24,7 +24,7 @@ static int spinner_init(struct spinner *spin, struct drm_i915_private *i915)
 	void *vaddr;
 	int err;
 
-	GEM_BUG_ON(INTEL_GEN(i915) < 8);
+	GEM_BUG_ON(GT_GEN_RANGE(i915, 0, 7));
 
 	memset(spin, 0, sizeof(*spin));
 	spin->i915 = i915;
diff --git a/drivers/gpu/drm/i915/selftests/intel_uncore.c b/drivers/gpu/drm/i915/selftests/intel_uncore.c
index 81d9d31042a9..14406cca480a 100644
--- a/drivers/gpu/drm/i915/selftests/intel_uncore.c
+++ b/drivers/gpu/drm/i915/selftests/intel_uncore.c
@@ -184,7 +184,7 @@ int intel_uncore_live_selftests(struct drm_i915_private *i915)
 	/* Confirm the table we load is still valid */
 	err = intel_fw_table_check(i915->uncore.fw_domains_table,
 				   i915->uncore.fw_domains_table_entries,
-				   INTEL_GEN(i915) >= 9);
+				   GT_GEN_RANGE(i915, 9, GEN_FOREVER));
 	if (err)
 		return err;
diff --git a/drivers/gpu/drm/i915/selftests/intel_workarounds.c b/drivers/gpu/drm/i915/selftests/intel_workarounds.c
index d1a0923d2f38..72573893e78c 100644
--- a/drivers/gpu/drm/i915/selftests/intel_workarounds.c
+++ b/drivers/gpu/drm/i915/selftests/intel_workarounds.c
@@ -57,7 +57,7 @@ read_nonprivs(struct i915_gem_context *ctx, struct intel_engine_cs *engine)
 		goto err_req;
 
 	srm = MI_STORE_REGISTER_MEM | MI_SRM_LRM_GLOBAL_GTT;
-	if (INTEL_GEN(ctx->i915) >= 8)
+	if (GT_GEN_RANGE(ctx->i915, 8, GEN_FOREVER))
 		srm++;

 	cs = intel_ring_begin(rq, 4 * RING_MAX_NONPRIV_SLOTS);
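The point of expressing both bounds as GT_GEN_RANGE() is that every check reduces to one AND against a per-device generation bitmask, which the compiler can fold when several adjacent checks test overlapping ranges. As a rough user-space sketch of that idea (the struct, field, and helper names below are hypothetical illustrations, not the driver's actual definitions from the earlier patches in this series):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for the driver's GEN_FOREVER upper bound. */
#define GEN_FOREVER 0xff

/* Hypothetical device info: bit (gen - 1) is set for the device's gen. */
struct device_info {
	uint32_t gen_mask;
};

/* Build a mask with one bit per generation in [start, end], clamped to
 * the 32 bits the mask can hold; start == 0 means "from the beginning". */
static inline uint32_t gen_range_mask(unsigned int start, unsigned int end)
{
	unsigned int s = start ? start : 1;
	unsigned int e = end > 32 ? 32 : end;
	uint32_t from_start = ~0u << (s - 1);	/* bits for gens >= s */
	uint32_t up_to_end = ~0u >> (32 - e);	/* bits for gens <= e */

	return from_start & up_to_end;
}

/* Both checks are a single mask test, never a numeric comparison. */
#define GT_GEN(info, g)          ((info)->gen_mask & (1u << ((g) - 1)))
#define GT_GEN_RANGE(info, s, e) ((info)->gen_mask & gen_range_mask((s), (e)))
```

With this shape, `GT_GEN_RANGE(info, 6, GEN_FOREVER)` and `GT_GEN_RANGE(info, 0, 10)` are both one AND against a constant, so chains of gen checks become candidates for the compiler to merge.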