From patchwork Tue Nov 20 18:37:21 2018
X-Patchwork-Submitter: "Navare, Manasi"
X-Patchwork-Id: 10691091
From: Manasi Navare
To: intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: Manasi Navare, Anusha Srivatsa, Gaurav K Singh
Subject: [PATCH v10 08/23] drm/i915/dp: Compute DSC pipe config in atomic check
Date: Tue, 20 Nov 2018 10:37:21 -0800
Message-Id: <20181120183736.28054-9-manasi.d.navare@intel.com>
In-Reply-To: <20181120183736.28054-1-manasi.d.navare@intel.com>
References: <20181120183736.28054-1-manasi.d.navare@intel.com>

DSC parameters such as the enable flag, compressed bpp, slice count and
dsc_split are added to intel_crtc_state. These parameters are set based
on the requested mode and the available link parameters during pipe
configuration in the atomic check phase. These values are later used to
populate the remaining DSC and RC parameters before enabling DSC in
atomic commit.
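The per-pipe DSC state filled in during atomic check has roughly the shape
sketched below. This is an illustration only: the struct name is made up for
the example, and the actual dsc_params member of struct intel_crtc_state is
not defined in this diff.

#include <linux/types.h>

/* Illustrative sketch (not the real definition) of the state this patch
 * writes: intel_dp_dsc_compute_config() fills these fields in atomic check,
 * and the atomic commit path later uses them to derive the remaining
 * DSC/RC parameters.
 */
struct example_dsc_params {
	bool compression_enable;	/* enable DSC on this pipe */
	bool dsc_split;			/* split the stream across 2 VDSC engines */
	u16 compressed_bpp;		/* target compressed bits per pixel */
	u8 slice_count;			/* number of DSC slices per scanline */
};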
v14: Remove leftovers, use dsc_bpc, refine dsc_compute_config (Ville)
v13:
* Compute DSC bpc only when DSC is req to be enabled (Ville)
v12:
* Override bpp with dsc dpcd color depth (Manasi)
v11:
* Const crtc_state, reject DSC on DP without FEC (Ville)
* Don't set dsc_split to false (Ville)
v10:
* Add a helper for dp_dsc support (Ville)
* Set pipe_config to max bpp, link params for DSC for now (Ville)
* Compute bpp - use dp dsc support helper (Ville)
v9:
* Rebase on top of drm-tip that now uses fast_narrow config for edp (Manasi)
v8:
* Check for DSC bpc not 0 (Manasi)
v7:
* Fix indentation in compute_m_n (Manasi)
v6 (From Gaurav):
* Remove function call of intel_dp_compute_dsc_params() and invoke
  intel_dp_compute_dsc_params() in the patch where it is defined to fix
  a compilation warning (Gaurav)
v5: Add drm_dsc_cfg in intel_crtc_state (Manasi)
v4:
* Rebase on refactoring of intel_dp_compute_config on tip (Manasi)
* Add a comment why we need to check PSR while enabling DSC (Gaurav)
v3:
* Check PPR > max_cdclock to use 2 VDSC instances (Ville)
v2:
* Add if-else for eDP/DP (Gaurav)

Cc: Jani Nikula
Cc: Ville Syrjala
Cc: Anusha Srivatsa
Cc: Gaurav K Singh
Signed-off-by: Manasi Navare
Reviewed-by: Anusha Srivatsa
Acked-by: Jani Nikula
Reviewed-by: Ville Syrjälä
---
 drivers/gpu/drm/i915/intel_display.c |   2 +-
 drivers/gpu/drm/i915/intel_display.h |   2 +-
 drivers/gpu/drm/i915/intel_dp.c      | 191 ++++++++++++++++++++++++---
 3 files changed, 171 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 132e978227fb..f2e6425d09ef 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -6723,7 +6723,7 @@ static void compute_m_n(unsigned int m, unsigned int n,
 }
 
 void
-intel_link_compute_m_n(int bits_per_pixel, int nlanes,
+intel_link_compute_m_n(u16 bits_per_pixel, int nlanes,
 		       int pixel_clock, int link_clock,
 		       struct intel_link_m_n *m_n,
 		       bool constant_n)
diff --git a/drivers/gpu/drm/i915/intel_display.h b/drivers/gpu/drm/i915/intel_display.h
index 5d50decbcbb5..afb6435887df 100644
--- a/drivers/gpu/drm/i915/intel_display.h
+++ b/drivers/gpu/drm/i915/intel_display.h
@@ -407,7 +407,7 @@ struct intel_link_m_n {
 	     (__i)++) \
 		for_each_if(plane)
 
-void intel_link_compute_m_n(int bpp, int nlanes,
+void intel_link_compute_m_n(u16 bpp, int nlanes,
 			    int pixel_clock, int link_clock,
 			    struct intel_link_m_n *m_n,
 			    bool constant_n);
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 2b090609bee2..78ec775aa90a 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -47,6 +47,8 @@
 
 /* DP DSC small joiner has 2 FIFOs each of 640 x 6 bytes */
 #define DP_DSC_MAX_SMALL_JOINER_RAM_BUFFER	61440
+#define DP_DSC_MIN_SUPPORTED_BPC		8
+#define DP_DSC_MAX_SUPPORTED_BPC		10
 
 /* DP DSC throughput values used for slice count calculations KPixels/s */
 #define DP_DSC_PEAK_PIXEL_RATE			2720000
@@ -1708,6 +1710,26 @@ struct link_config_limits {
 	int min_bpp, max_bpp;
 };
 
+static bool intel_dp_source_supports_dsc(struct intel_dp *intel_dp,
+					 const struct intel_crtc_state *pipe_config)
+{
+	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+
+	/* FIXME: FEC is needed for external DP; until then, reject DSC on DP */
+	if (!intel_dp_is_edp(intel_dp))
+		return false;
+
+	return INTEL_GEN(dev_priv) >= 10 &&
+		pipe_config->cpu_transcoder != TRANSCODER_A;
+}
+
+static bool intel_dp_supports_dsc(struct intel_dp *intel_dp,
+				  const struct intel_crtc_state *pipe_config)
+{
+	return intel_dp_source_supports_dsc(intel_dp, pipe_config) &&
+		drm_dp_sink_supports_dsc(intel_dp->dsc_dpcd);
+}
+
 static int intel_dp_compute_bpp(struct intel_dp *intel_dp,
 				struct intel_crtc_state *pipe_config)
 {
@@ -1842,14 +1864,114 @@ intel_dp_compute_link_config_fast(struct intel_dp *intel_dp,
 	return false;
 }
 
+static int intel_dp_dsc_compute_bpp(struct intel_dp *intel_dp, u8 dsc_max_bpc)
+{
+	int i, num_bpc;
+	u8 dsc_bpc[3] = {0};
+
+	num_bpc = drm_dp_dsc_sink_supported_input_bpcs(intel_dp->dsc_dpcd,
+						       dsc_bpc);
+	for (i = 0; i < num_bpc; i++) {
+		if (dsc_max_bpc >= dsc_bpc[i])
+			return dsc_bpc[i] * 3;
+	}
+
+	return 0;
+}
+
+static bool intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
+					struct intel_crtc_state *pipe_config,
+					struct drm_connector_state *conn_state,
+					struct link_config_limits *limits)
+{
+	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
+	struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
+	struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
+	u8 dsc_max_bpc;
+	u16 dsc_max_output_bpp = 0;
+	u8 dsc_dp_slice_count = 0;
+	int pipe_bpp;
+
+	if (!intel_dp_supports_dsc(intel_dp, pipe_config))
+		return false;
+
+	dsc_max_bpc = min_t(u8, DP_DSC_MAX_SUPPORTED_BPC,
+			    conn_state->max_requested_bpc);
+
+	pipe_bpp = intel_dp_dsc_compute_bpp(intel_dp, dsc_max_bpc);
+	if (pipe_bpp < DP_DSC_MIN_SUPPORTED_BPC * 3) {
+		DRM_DEBUG_KMS("No DSC support for less than 8bpc\n");
+		return false;
+	}
+
+	/*
+	 * For now enable DSC for max bpp, max link rate, max lane count.
+	 * Optimize this later for the minimum possible link rate/lane count
+	 * with DSC enabled for the requested mode.
+	 */
+	pipe_config->pipe_bpp = pipe_bpp;
+	pipe_config->port_clock = intel_dp->common_rates[limits->max_clock];
+	pipe_config->lane_count = limits->max_lane_count;
+
+	if (intel_dp_is_edp(intel_dp)) {
+		pipe_config->dsc_params.compressed_bpp =
+			min_t(u16, drm_edp_dsc_sink_output_bpp(intel_dp->dsc_dpcd) >> 4,
+			      pipe_config->pipe_bpp);
+		pipe_config->dsc_params.slice_count =
+			drm_dp_dsc_sink_max_slice_count(intel_dp->dsc_dpcd,
+							true);
+	} else {
+		dsc_max_output_bpp =
+			intel_dp_dsc_get_output_bpp(pipe_config->port_clock,
+						    pipe_config->lane_count,
+						    adjusted_mode->crtc_clock,
+						    adjusted_mode->crtc_hdisplay);
+		dsc_dp_slice_count =
+			intel_dp_dsc_get_slice_count(intel_dp,
+						     adjusted_mode->crtc_clock,
+						     adjusted_mode->crtc_hdisplay);
+		if (!(dsc_max_output_bpp && dsc_dp_slice_count)) {
+			DRM_DEBUG_KMS("Compressed BPP/Slice Count not supported\n");
+			return false;
+		}
+		pipe_config->dsc_params.compressed_bpp = min_t(u16,
+							       dsc_max_output_bpp >> 4,
+							       pipe_config->pipe_bpp);
+		pipe_config->dsc_params.slice_count = dsc_dp_slice_count;
+	}
+	/*
+	 * VDSC engine operates at 1 Pixel per clock, so if peak pixel rate
+	 * is greater than the maximum Cdclock and if slice count is even
+	 * then we need to use 2 VDSC instances.
+	 */
+	if (adjusted_mode->crtc_clock > dev_priv->max_cdclk_freq) {
+		if (pipe_config->dsc_params.slice_count > 1) {
+			pipe_config->dsc_params.dsc_split = true;
+		} else {
+			DRM_DEBUG_KMS("Cannot split stream to use 2 VDSC instances\n");
+			return false;
+		}
+	}
+	pipe_config->dsc_params.compression_enable = true;
+	DRM_DEBUG_KMS("DP DSC computed with Input Bpp = %d "
+		      "Compressed Bpp = %d Slice Count = %d\n",
+		      pipe_config->pipe_bpp,
+		      pipe_config->dsc_params.compressed_bpp,
+		      pipe_config->dsc_params.slice_count);
+
+	return true;
+}
+
 static bool
 intel_dp_compute_link_config(struct intel_encoder *encoder,
-			     struct intel_crtc_state *pipe_config)
+			     struct intel_crtc_state *pipe_config,
+			     struct drm_connector_state *conn_state)
 {
 	struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
 	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
 	struct link_config_limits limits;
 	int common_len;
+	bool ret;
 
 	common_len = intel_dp_common_len_rate_limit(intel_dp,
 						    intel_dp->max_link_rate);
@@ -1888,7 +2010,7 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
 		      intel_dp->common_rates[limits.max_clock],
 		      limits.max_bpp, adjusted_mode->crtc_clock);
 
-	if (intel_dp_is_edp(intel_dp)) {
+	if (intel_dp_is_edp(intel_dp))
 		/*
 		 * Optimize for fast and narrow. eDP 1.3 section 3.3 and eDP 1.4
 		 * section A.1: "It is recommended that the minimum number of
@@ -1898,26 +2020,42 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
 		 * Note that we use the max clock and lane count for eDP 1.3 and
 		 * earlier, and fast vs. wide is irrelevant.
 		 */
-		if (!intel_dp_compute_link_config_fast(intel_dp, pipe_config,
-						       &limits))
-			return false;
-	} else {
+		ret = intel_dp_compute_link_config_fast(intel_dp, pipe_config,
+							&limits);
+	else
 		/* Optimize for slow and wide. */
-		if (!intel_dp_compute_link_config_wide(intel_dp, pipe_config,
-						       &limits))
+		ret = intel_dp_compute_link_config_wide(intel_dp, pipe_config,
+							&limits);
+
+	/* enable compression if the mode doesn't fit available BW */
+	if (!ret) {
+		if (!intel_dp_dsc_compute_config(intel_dp, pipe_config,
+						 conn_state, &limits))
 			return false;
 	}
 
-	DRM_DEBUG_KMS("DP lane count %d clock %d bpp %d\n",
-		      pipe_config->lane_count, pipe_config->port_clock,
-		      pipe_config->pipe_bpp);
+	if (pipe_config->dsc_params.compression_enable) {
+		DRM_DEBUG_KMS("DP lane count %d clock %d Input bpp %d Compressed bpp %d\n",
+			      pipe_config->lane_count, pipe_config->port_clock,
+			      pipe_config->pipe_bpp,
+			      pipe_config->dsc_params.compressed_bpp);
 
-	DRM_DEBUG_KMS("DP link rate required %i available %i\n",
-		      intel_dp_link_required(adjusted_mode->crtc_clock,
-					     pipe_config->pipe_bpp),
-		      intel_dp_max_data_rate(pipe_config->port_clock,
-					     pipe_config->lane_count));
+		DRM_DEBUG_KMS("DP link rate required %i available %i\n",
+			      intel_dp_link_required(adjusted_mode->crtc_clock,
+						     pipe_config->dsc_params.compressed_bpp),
+			      intel_dp_max_data_rate(pipe_config->port_clock,
+						     pipe_config->lane_count));
+	} else {
+		DRM_DEBUG_KMS("DP lane count %d clock %d bpp %d\n",
+			      pipe_config->lane_count, pipe_config->port_clock,
+			      pipe_config->pipe_bpp);
+		DRM_DEBUG_KMS("DP link rate required %i available %i\n",
+			      intel_dp_link_required(adjusted_mode->crtc_clock,
+						     pipe_config->pipe_bpp),
+			      intel_dp_max_data_rate(pipe_config->port_clock,
+						     pipe_config->lane_count));
+	}
 
 	return true;
 }
@@ -1983,7 +2121,7 @@ intel_dp_compute_config(struct intel_encoder *encoder,
 	if (adjusted_mode->flags & DRM_MODE_FLAG_DBLCLK)
 		return false;
 
-	if (!intel_dp_compute_link_config(encoder, pipe_config))
+	if (!intel_dp_compute_link_config(encoder, pipe_config, conn_state))
 		return false;
 
 	if (intel_conn_state->broadcast_rgb == INTEL_BROADCAST_RGB_AUTO) {
@@ -2001,11 +2139,20 @@ intel_dp_compute_config(struct intel_encoder *encoder,
 		intel_conn_state->broadcast_rgb == INTEL_BROADCAST_RGB_LIMITED;
 	}
 
-	intel_link_compute_m_n(pipe_config->pipe_bpp, pipe_config->lane_count,
-			       adjusted_mode->crtc_clock,
-			       pipe_config->port_clock,
-			       &pipe_config->dp_m_n,
-			       constant_n);
+	if (!pipe_config->dsc_params.compression_enable)
+		intel_link_compute_m_n(pipe_config->pipe_bpp,
+				       pipe_config->lane_count,
+				       adjusted_mode->crtc_clock,
+				       pipe_config->port_clock,
+				       &pipe_config->dp_m_n,
+				       constant_n);
+	else
+		intel_link_compute_m_n(pipe_config->dsc_params.compressed_bpp,
+				       pipe_config->lane_count,
+				       adjusted_mode->crtc_clock,
+				       pipe_config->port_clock,
+				       &pipe_config->dp_m_n,
+				       constant_n);
 
 	if (intel_connector->panel.downclock_mode != NULL &&
 		dev_priv->drrs.type == SEAMLESS_DRRS_SUPPORT) {
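As a reading aid, here is a minimal, self-contained C sketch of the decision
flow added above: DSC is only a fallback when the uncompressed configuration
does not fit, the compressed bpp is capped by the sink limit (the patch shifts
the sink value right by 4, i.e. it is in 1/16 bpp units), and the stream is
split across two VDSC engines when the pixel rate exceeds the maximum CD
clock. All names and numbers below are illustrative, not i915 code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct dsc_sketch {
	bool enable;
	bool split;
	uint16_t compressed_bpp;
	uint8_t slice_count;
};

/* Mirror of the fallback logic: only called when the uncompressed link
 * configuration did not fit the available bandwidth. */
static bool dsc_sketch_compute(struct dsc_sketch *cfg,
			       int crtc_clock_khz, int max_cdclk_khz,
			       uint8_t slice_count,
			       uint16_t sink_max_bpp_x16, uint16_t pipe_bpp)
{
	/* Sink advertises its bpp limit in 1/16 bpp units; never exceed
	 * the uncompressed pipe bpp. */
	uint16_t sink_max_bpp = sink_max_bpp_x16 >> 4;

	cfg->compressed_bpp = sink_max_bpp < pipe_bpp ? sink_max_bpp : pipe_bpp;
	cfg->slice_count = slice_count;

	/* One VDSC engine processes one pixel per clock, so pixel rates above
	 * the maximum CD clock require splitting the stream across two
	 * engines, which in turn needs more than one slice. */
	if (crtc_clock_khz > max_cdclk_khz) {
		if (slice_count > 1)
			cfg->split = true;
		else
			return false;
	}

	cfg->enable = true;
	return true;
}

int main(void)
{
	struct dsc_sketch cfg = { 0 };

	/* Illustrative numbers: ~4K@120 pixel clock vs. a 652.8 MHz CD clock,
	 * 4 slices, sink limit of 12 bpp (12 * 16 in 1/16 bpp units),
	 * 24 bpp uncompressed input. */
	if (dsc_sketch_compute(&cfg, 1188000, 652800, 4, 12 * 16, 24))
		printf("DSC: compressed_bpp=%d slice_count=%d split=%d\n",
		       cfg.compressed_bpp, cfg.slice_count, cfg.split);

	return 0;
}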