From patchwork Wed Sep 12 00:55:55 2018
X-Patchwork-Submitter: "Navare, Manasi"
X-Patchwork-Id: 10596571
From: Manasi Navare
To: intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: Manasi Navare, Anusha Srivatsa, Gaurav K Singh
Subject: [PATCH v4 13/25] drm/i915/dp: Compute DSC pipe config in atomic check
Date: Tue, 11 Sep 2018 17:55:55 -0700
Message-Id: <20180912005607.29522-14-manasi.d.navare@intel.com>
In-Reply-To: <20180912005607.29522-1-manasi.d.navare@intel.com>
References: <20180912005607.29522-1-manasi.d.navare@intel.com>
X-Mailer: git-send-email 2.18.0

DSC parameters such as the compression enable, compressed bpp, slice count
and dsc_split are added to intel_crtc_state. These parameters are set based
on the requested mode and the available link parameters during pipe
configuration in the atomic check phase. The values are then used to
populate the remaining DSC and RC parameters before enabling DSC in atomic
commit.
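For reference, the per-pipe DSC state that this patch fills in looks roughly
like the sketch below. This is only an illustration: the field names follow
their use in intel_dp_dsc_compute_config() further down, while the exact
types and the placement inside intel_crtc_state (the struct itself comes from
the drm_dsc_cfg/intel_crtc_state change mentioned in the v5 note) are
assumptions, not part of this patch.

/* Illustrative sketch only -- types inferred from how the fields are used */
struct {
	bool compression_enable;	/* DSC enabled on this pipe */
	bool dsc_split;			/* split the stream across 2 VDSC engines */
	u16 compressed_bpp;		/* target compressed bits per pixel */
	u8 slice_count;			/* DSC slices per scanline */
} dsc_params;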
v8:
* Check for DSC bpc not 0 (manasi)

v7:
* Fix indentation in compute_m_n (Manasi)

v6 (From Gaurav):
* Remove function call of intel_dp_compute_dsc_params() and invoke
  intel_dp_compute_dsc_params() in the patch where it is defined to fix
  compilation warning (Gaurav)

v5: Add drm_dsc_cfg in intel_crtc_state (Manasi)

v4:
* Rebase on refactoring of intel_dp_compute_config on tip (Manasi)
* Add a comment why we need to check PSR while enabling DSC (Gaurav)

v3:
* Check PPR > max_cdclock to use 2 VDSC instances (Ville)

v2:
* Add if-else for eDP/DP (Gaurav)

Cc: Jani Nikula
Cc: Ville Syrjala
Cc: Anusha Srivatsa
Cc: Gaurav K Singh
Signed-off-by: Manasi Navare
---
 drivers/gpu/drm/i915/intel_display.c |  20 ++--
 drivers/gpu/drm/i915/intel_display.h |   3 +-
 drivers/gpu/drm/i915/intel_dp.c      | 153 +++++++++++++++++++++++----
 drivers/gpu/drm/i915/intel_dp_mst.c  |   2 +-
 4 files changed, 148 insertions(+), 30 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 2b77d9350a3a..a7554132bfe0 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -6471,7 +6471,7 @@ static int ironlake_fdi_compute_config(struct intel_crtc *intel_crtc,
 
 	pipe_config->fdi_lanes = lane;
 
-	intel_link_compute_m_n(pipe_config->pipe_bpp, lane, fdi_dotclock,
+	intel_link_compute_m_n(pipe_config->pipe_bpp, 0, lane, fdi_dotclock,
 			       link_bw, &pipe_config->fdi_m_n, false);
 
 	ret = ironlake_check_fdi_lanes(dev, intel_crtc->pipe, pipe_config);
@@ -6708,17 +6708,25 @@ static void compute_m_n(unsigned int m, unsigned int n,
 }
 
 void
-intel_link_compute_m_n(int bits_per_pixel, int nlanes,
+intel_link_compute_m_n(int bits_per_pixel, uint16_t compressed_bpp,
+		       int nlanes,
 		       int pixel_clock, int link_clock,
 		       struct intel_link_m_n *m_n,
 		       bool reduce_m_n)
 {
 	m_n->tu = 64;
 
-	compute_m_n(bits_per_pixel * pixel_clock,
-		    link_clock * nlanes * 8,
-		    &m_n->gmch_m, &m_n->gmch_n,
-		    reduce_m_n);
+	/* For DSC, Data M/N calculation uses compressed BPP */
+	if (compressed_bpp)
+		compute_m_n(compressed_bpp * pixel_clock,
+			    link_clock * nlanes * 8,
+			    &m_n->gmch_m, &m_n->gmch_n,
+			    reduce_m_n);
+	else
+		compute_m_n(bits_per_pixel * pixel_clock,
+			    link_clock * nlanes * 8,
+			    &m_n->gmch_m, &m_n->gmch_n,
+			    reduce_m_n);
 
 	compute_m_n(pixel_clock, link_clock,
 		    &m_n->link_m, &m_n->link_n,
diff --git a/drivers/gpu/drm/i915/intel_display.h b/drivers/gpu/drm/i915/intel_display.h
index ed474da6c200..3fe52788b4cf 100644
--- a/drivers/gpu/drm/i915/intel_display.h
+++ b/drivers/gpu/drm/i915/intel_display.h
@@ -402,7 +402,8 @@ struct intel_link_m_n {
 	     (__i)++) \
 		for_each_if(plane)
 
-void intel_link_compute_m_n(int bpp, int nlanes,
+void intel_link_compute_m_n(int bpp, uint16_t compressed_bpp,
+			    int nlanes,
 			    int pixel_clock, int link_clock,
 			    struct intel_link_m_n *m_n,
 			    bool reduce_m_n);
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 63b7efa10a0f..ade25fbfcdd1 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -47,6 +47,8 @@
 
 /* DP DSC small joiner has 2 FIFOs each of 640 x 6 bytes */
 #define DP_DSC_MAX_SMALL_JOINER_RAM_BUFFER	61440
+#define DP_DSC_MIN_SUPPORTED_BPC		8
+#define DP_DSC_MAX_SUPPORTED_BPC		10
 
 /* DP DSC throughput values used for slice count calculations KPixels/s */
 #define DP_DSC_PEAK_PIXEL_RATE			2720000
@@ -1894,6 +1896,16 @@ static int intel_dp_compute_bpp(struct intel_dp *intel_dp,
 		}
 	}
 
+	/* If DSC is supported, use the max value reported by panel */
+	if (INTEL_GEN(dev_priv) >= 10 &&
+	    drm_dp_sink_supports_dsc(intel_dp->dsc_dpcd)) {
+		bpc = min_t(u8,
+			    drm_dp_dsc_sink_max_color_depth(intel_dp->dsc_dpcd),
+			    DP_DSC_MAX_SUPPORTED_BPC);
+		if (bpc)
+			bpp = 3 * bpc;
+	}
+
 	return bpp;
 }
 
@@ -1954,14 +1966,11 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
 				link_clock = intel_dp->common_rates[clock];
 				link_avail = intel_dp_max_data_rate(link_clock,
 								    lane_count);
-
-				if (mode_rate <= link_avail) {
-					pipe_config->lane_count = lane_count;
-					pipe_config->pipe_bpp = bpp;
-					pipe_config->port_clock = link_clock;
-
+				pipe_config->lane_count = lane_count;
+				pipe_config->pipe_bpp = bpp;
+				pipe_config->port_clock = link_clock;
+				if (mode_rate <= link_avail)
 					return true;
-				}
 			}
 		}
 	}
@@ -1969,10 +1978,83 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
 	return false;
 }
 
+static bool intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
+					struct intel_crtc_state *pipe_config,
+					struct link_config_limits *limits)
+{
+	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
+	struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
+	struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
+	enum pipe pipe = to_intel_crtc(pipe_config->base.crtc)->pipe;
+	u16 dsc_max_output_bpp = 0;
+	u8 dsc_dp_slice_count = 0;
+
+	if (INTEL_GEN(dev_priv) < 10 ||
+	    !drm_dp_sink_supports_dsc(intel_dp->dsc_dpcd))
+		return false;
+
+	/* DP DSC only supported on Pipe B and C */
+	if (pipe == PIPE_A && !intel_dp_is_edp(intel_dp))
+		return false;
+
+	/* DSC not supported for DSC sink BPC < 8 */
+	if (limits->max_bpp < 3 * DP_DSC_MIN_SUPPORTED_BPC) {
+		DRM_DEBUG_KMS("No DSC support for less than 8bpc\n");
+		return false;
+	}
+
+	if (intel_dp_is_edp(intel_dp)) {
+		pipe_config->dsc_params.compressed_bpp =
+			drm_edp_dsc_sink_output_bpp(intel_dp->dsc_dpcd) >> 4;
+		pipe_config->dsc_params.slice_count =
+			drm_dp_dsc_sink_max_slice_count(intel_dp->dsc_dpcd,
+							true);
+	} else {
+		dsc_max_output_bpp =
+			intel_dp_dsc_get_output_bpp(pipe_config->port_clock,
+						    pipe_config->lane_count,
+						    adjusted_mode->crtc_clock,
+						    adjusted_mode->crtc_hdisplay);
+		dsc_dp_slice_count =
+			intel_dp_dsc_get_slice_count(intel_dp,
+						     adjusted_mode->crtc_clock,
+						     adjusted_mode->crtc_hdisplay);
+		if (!(dsc_max_output_bpp && dsc_dp_slice_count)) {
+			DRM_DEBUG_KMS("Compressed BPP/Slice Count not supported\n");
+			return false;
+		}
+		pipe_config->dsc_params.compressed_bpp = dsc_max_output_bpp >> 4;
+		pipe_config->dsc_params.slice_count = dsc_dp_slice_count;
+	}
+	/*
+	 * VDSC engine operates at 1 Pixel per clock, so if peak pixel rate
+	 * is greater than the maximum Cdclock and if slice count is even
+	 * then we need to use 2 VDSC instances.
+	 */
+	pipe_config->dsc_params.dsc_split = false;
+	if (adjusted_mode->crtc_clock > dev_priv->max_cdclk_freq) {
+		if (pipe_config->dsc_params.slice_count > 1) {
+			pipe_config->dsc_params.dsc_split = true;
+		} else {
+			DRM_DEBUG_KMS("Cannot split stream to use 2 VDSC instances\n");
+			return false;
+		}
+	}
+	pipe_config->dsc_params.compression_enable = true;
+	DRM_DEBUG_KMS("DP DSC computed with Input Bpp = %d "
+		      "Compressed Bpp = %d Slice Count = %d\n",
+		      pipe_config->pipe_bpp,
+		      pipe_config->dsc_params.compressed_bpp,
+		      pipe_config->dsc_params.slice_count);
+
+	return true;
+}
+
 static bool
 intel_dp_compute_link_config(struct intel_encoder *encoder,
 			     struct intel_crtc_state *pipe_config)
 {
+	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 	struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
 	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
 	struct link_config_limits limits;
@@ -1990,7 +2072,9 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
 	limits.min_lane_count = 1;
 	limits.max_lane_count = intel_dp_max_lane_count(intel_dp);
 
-	limits.min_bpp = 6 * 3;
+	limits.min_bpp = (INTEL_GEN(dev_priv) >= 10 &&
+			  drm_dp_sink_supports_dsc(intel_dp->dsc_dpcd)) ?
+			  DP_DSC_MIN_SUPPORTED_BPC * 3 : 6 * 3;
 	limits.max_bpp = intel_dp_compute_bpp(intel_dp, pipe_config);
 
 	if (intel_dp_is_edp(intel_dp)) {
@@ -2017,19 +2101,42 @@
 	 * Optimize for slow and wide. This is the place to add alternative
 	 * optimization policy.
 	 */
-	if (!intel_dp_compute_link_config_wide(intel_dp, pipe_config, &limits))
-		return false;
-
-	DRM_DEBUG_KMS("DP lane count %d clock %d bpp %d\n",
-		      pipe_config->lane_count, pipe_config->port_clock,
-		      pipe_config->pipe_bpp);
-
-	DRM_DEBUG_KMS("DP link rate required %i available %i\n",
-		      intel_dp_link_required(adjusted_mode->crtc_clock,
-					     pipe_config->pipe_bpp),
-		      intel_dp_max_data_rate(pipe_config->port_clock,
-					     pipe_config->lane_count));
+	if (!intel_dp_compute_link_config_wide(intel_dp, pipe_config,
+					       &limits)) {
+		DRM_DEBUG_KMS("DP required Link rate %i does not fit available %i\n",
+			      intel_dp_link_required(adjusted_mode->crtc_clock,
+						     pipe_config->pipe_bpp),
+			      intel_dp_max_data_rate(pipe_config->port_clock,
+						     pipe_config->lane_count));
+
+		/* enable compression if the mode doesn't fit available BW */
+		if (!intel_dp_dsc_compute_config(intel_dp, pipe_config,
+						 &limits))
+			return false;
+	}
+
+	if (pipe_config->dsc_params.compression_enable) {
+		DRM_DEBUG_KMS("DP lane count %d clock %d Input bpp %d Compressed bpp %d\n",
+			      pipe_config->lane_count, pipe_config->port_clock,
+			      pipe_config->pipe_bpp,
+			      pipe_config->dsc_params.compressed_bpp);
+
+		DRM_DEBUG_KMS("DP link rate required %i available %i\n",
+			      intel_dp_link_required(adjusted_mode->crtc_clock,
+						     pipe_config->dsc_params.compressed_bpp),
+			      intel_dp_max_data_rate(pipe_config->port_clock,
+						     pipe_config->lane_count));
+	} else {
+		DRM_DEBUG_KMS("DP lane count %d clock %d bpp %d\n",
+			      pipe_config->lane_count, pipe_config->port_clock,
+			      pipe_config->pipe_bpp);
+		DRM_DEBUG_KMS("DP link rate required %i available %i\n",
+			      intel_dp_link_required(adjusted_mode->crtc_clock,
+						     pipe_config->pipe_bpp),
+			      intel_dp_max_data_rate(pipe_config->port_clock,
+						     pipe_config->lane_count));
+	}
 
 	return true;
 }
 
@@ -2108,7 +2215,9 @@ intel_dp_compute_config(struct intel_encoder *encoder,
 			intel_conn_state->broadcast_rgb == INTEL_BROADCAST_RGB_LIMITED;
 	}
 
-	intel_link_compute_m_n(pipe_config->pipe_bpp, pipe_config->lane_count,
+	intel_link_compute_m_n(pipe_config->pipe_bpp,
+			       pipe_config->dsc_params.compressed_bpp,
+			       pipe_config->lane_count,
 			       adjusted_mode->crtc_clock,
 			       pipe_config->port_clock,
 			       &pipe_config->dp_m_n,
@@ -2117,7 +2226,7 @@
 	if (intel_connector->panel.downclock_mode != NULL &&
 		dev_priv->drrs.type == SEAMLESS_DRRS_SUPPORT) {
 			pipe_config->has_drrs = true;
-			intel_link_compute_m_n(pipe_config->pipe_bpp,
+			intel_link_compute_m_n(pipe_config->pipe_bpp, 0,
 					       pipe_config->lane_count,
 					       intel_connector->panel.downclock_mode->clock,
 					       pipe_config->port_clock,
diff --git a/drivers/gpu/drm/i915/intel_dp_mst.c b/drivers/gpu/drm/i915/intel_dp_mst.c
index 77920f1a3da1..1f2766e876f3 100644
--- a/drivers/gpu/drm/i915/intel_dp_mst.c
+++ b/drivers/gpu/drm/i915/intel_dp_mst.c
@@ -83,7 +83,7 @@ static bool intel_dp_mst_compute_config(struct intel_encoder *encoder,
 		return false;
 	}
 
-	intel_link_compute_m_n(bpp, lane_count,
+	intel_link_compute_m_n(bpp, 0, lane_count,
 			       adjusted_mode->crtc_clock,
 			       pipe_config->port_clock,
 			       &pipe_config->dp_m_n,
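To see what feeding the compressed bpp into the Data M/N calculation changes,
here is a small standalone illustration (not kernel code). It only mirrors the
ratio formed by the compute_m_n() arguments above -- bpp * pixel_clock over
link_clock * nlanes * 8 -- and the mode/link numbers are made-up examples; the
real compute_m_n() also reduces and rounds the resulting M/N pair.

#include <stdio.h>

/* stream-to-link data rate ratio, mirroring the compute_m_n() arguments */
static double data_ratio(unsigned int bpp, unsigned int pixel_clock_khz,
			 unsigned int link_clock_khz, unsigned int nlanes)
{
	return (double)bpp * pixel_clock_khz /
	       ((double)link_clock_khz * nlanes * 8);
}

int main(void)
{
	unsigned int pixel_clock = 533250;	/* kHz, example high-resolution mode */
	unsigned int link_clock = 540000;	/* kHz, HBR2 */
	unsigned int nlanes = 2;

	/* uncompressed 24 bpp overshoots the link (ratio > 1.0) */
	printf("uncompressed 24 bpp: %.3f\n",
	       data_ratio(24, pixel_clock, link_clock, nlanes));
	/* with DSC at 8 bpp the same mode fits comfortably (ratio < 1.0) */
	printf("compressed    8 bpp: %.3f\n",
	       data_ratio(8, pixel_clock, link_clock, nlanes));
	return 0;
}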