Message ID | 20200109150752.28098-1-mario.kleiner.de@gmail.com (mailing list archive)
---|---
State | New, archived
Series | drm/i915/dp: Add current maximum eDP link rate to sink_rate array.
On Thu, Jan 09, 2020 at 04:07:52PM +0100, Mario Kleiner wrote: > If the current eDP link rate, as read from hw, provides a > higher bandwidth than the standard link rates, then add the > current link rate to the link_rates array for consideration > in future mode-sets. > > These initial current eDP link settings have been set up by > firmware during boot, so they should work on the eDP panel. > Therefore use them if the firmware thinks they are good and > they provide higher link bandwidth, e.g., to enable higher > resolutions / color depths. > > This fixes a problem found on the MacBookPro 2017 Retina panel: > > The panel reports 10 bpc color depth in its EDID, and the UEFI > firmware chooses link settings at boot which support enough > bandwidth for 10 bpc (324000 kbit/sec to be precise), but the > DP_MAX_LINK_RATE dpcd register only reports 2.7 Gbps as possible, > so intel_dp_set_sink_rates() would cap at that. This restricts > achievable color depth to 8 bpc, not providing the full color > depth of the panel. With this commit, we can use firmware setting > and get the full 10 bpc advertised by the Retina panel. > > Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com> > Cc: Daniel Vetter <daniel.vetter@ffwll.ch> > --- > drivers/gpu/drm/i915/display/intel_dp.c | 23 +++++++++++++++++++++++ > 1 file changed, 23 insertions(+) > > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c > index 2f31d226c6eb..aa3e0b5108c6 100644 > --- a/drivers/gpu/drm/i915/display/intel_dp.c > +++ b/drivers/gpu/drm/i915/display/intel_dp.c > @@ -4368,6 +4368,8 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp) > { > struct drm_i915_private *dev_priv = > to_i915(dp_to_dig_port(intel_dp)->base.base.dev); > + int max_rate; > + u8 link_bw; > > /* this function is meant to be called only once */ > WARN_ON(intel_dp->dpcd[DP_DPCD_REV] != 0); > @@ -4433,6 +4435,27 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp) > else > intel_dp_set_sink_rates(intel_dp); > > + /* > + * If the firmware programmed a rate higher than the standard sink rates > + * during boot, then add that rate as a valid sink rate, as fw knows > + * this is a good rate and we get extra bandwidth. > + * > + * Helps, e.g., on the Apple MacBookPro 2017 Retina panel, which is only > + * eDP 1.1, but supports the unusual rate of 324000 kHz at bootup, for > + * 10 bpc / 30 bit color depth. > + */ > + if (!intel_dp->use_rate_select && > + (drm_dp_dpcd_read(&intel_dp->aux, DP_LINK_BW_SET, &link_bw, 1) == 1) && > + (link_bw > 0) && (intel_dp->num_sink_rates < DP_MAX_SUPPORTED_RATES)) { > + max_rate = drm_dp_bw_code_to_link_rate(link_bw); > + if (max_rate > intel_dp->sink_rates[intel_dp->num_sink_rates - 1]) { > + intel_dp->sink_rates[intel_dp->num_sink_rates] = max_rate; > + intel_dp->num_sink_rates++; > + DRM_DEBUG_KMS("Adding max bandwidth eDP rate %d kHz.\n", > + max_rate); > + } Hmm. I guess we could do this. But plese put it into a separate function so we don't end up with that super ugly if condition. The debug message should probably be a bit more explicit. Eg. something like: "Firmware using non-standard link rate %d kHz. Including it in sink rates.\n" I'm also wondering if we shouldn't just add the link rate to the sink rates regradless of whether it's the highest rate or not... 
> + } > + > intel_dp_set_common_rates(intel_dp); > > /* Read the eDP DSC DPCD registers */ > -- > 2.24.0 > > _______________________________________________ > dri-devel mailing list > dri-devel@lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/dri-devel
On Thu, Jan 09, 2020 at 05:26:57PM +0200, Ville Syrjälä wrote: > On Thu, Jan 09, 2020 at 04:07:52PM +0100, Mario Kleiner wrote: > > If the current eDP link rate, as read from hw, provides a > > higher bandwidth than the standard link rates, then add the > > current link rate to the link_rates array for consideration > > in future mode-sets. > > > > These initial current eDP link settings have been set up by > > firmware during boot, so they should work on the eDP panel. > > Therefore use them if the firmware thinks they are good and > > they provide higher link bandwidth, e.g., to enable higher > > resolutions / color depths. > > > > This fixes a problem found on the MacBookPro 2017 Retina panel: > > > > The panel reports 10 bpc color depth in its EDID, and the UEFI > > firmware chooses link settings at boot which support enough > > bandwidth for 10 bpc (324000 kbit/sec to be precise), but the > > DP_MAX_LINK_RATE dpcd register only reports 2.7 Gbps as possible, Does it actually or do we just ignore the fact that it reports 3.24Gbps? If it really reports 3.24 then we should be able to just add that to dp_rates[] in intel_dp_set_sink_rates() and be done with it. Although we'd likely want to skip 3.24 unless it really is reported as the max so as to not use that non-standard rate on other displays. So would require a bit fancier logic for that. > > so intel_dp_set_sink_rates() would cap at that. This restricts > > achievable color depth to 8 bpc, not providing the full color > > depth of the panel. With this commit, we can use firmware setting > > and get the full 10 bpc advertised by the Retina panel. > > > > Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com> > > Cc: Daniel Vetter <daniel.vetter@ffwll.ch> > > --- > > drivers/gpu/drm/i915/display/intel_dp.c | 23 +++++++++++++++++++++++ > > 1 file changed, 23 insertions(+) > > > > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c > > index 2f31d226c6eb..aa3e0b5108c6 100644 > > --- a/drivers/gpu/drm/i915/display/intel_dp.c > > +++ b/drivers/gpu/drm/i915/display/intel_dp.c > > @@ -4368,6 +4368,8 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp) > > { > > struct drm_i915_private *dev_priv = > > to_i915(dp_to_dig_port(intel_dp)->base.base.dev); > > + int max_rate; > > + u8 link_bw; > > > > /* this function is meant to be called only once */ > > WARN_ON(intel_dp->dpcd[DP_DPCD_REV] != 0); > > @@ -4433,6 +4435,27 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp) > > else > > intel_dp_set_sink_rates(intel_dp); > > > > + /* > > + * If the firmware programmed a rate higher than the standard sink rates > > + * during boot, then add that rate as a valid sink rate, as fw knows > > + * this is a good rate and we get extra bandwidth. > > + * > > + * Helps, e.g., on the Apple MacBookPro 2017 Retina panel, which is only > > + * eDP 1.1, but supports the unusual rate of 324000 kHz at bootup, for > > + * 10 bpc / 30 bit color depth. > > + */ > > + if (!intel_dp->use_rate_select && > > + (drm_dp_dpcd_read(&intel_dp->aux, DP_LINK_BW_SET, &link_bw, 1) == 1) && > > + (link_bw > 0) && (intel_dp->num_sink_rates < DP_MAX_SUPPORTED_RATES)) { > > + max_rate = drm_dp_bw_code_to_link_rate(link_bw); > > + if (max_rate > intel_dp->sink_rates[intel_dp->num_sink_rates - 1]) { > > + intel_dp->sink_rates[intel_dp->num_sink_rates] = max_rate; > > + intel_dp->num_sink_rates++; > > + DRM_DEBUG_KMS("Adding max bandwidth eDP rate %d kHz.\n", > > + max_rate); > > + } > > Hmm. I guess we could do this. 
But plese put it into a separate > function so we don't end up with that super ugly if condition. > > The debug message should probably be a bit more explicit. Eg. > something like: > "Firmware using non-standard link rate %d kHz. Including it in sink rates.\n" > > I'm also wondering if we shouldn't just add the link rate to the sink > rates regradless of whether it's the highest rate or not... > > > + } > > + > > intel_dp_set_common_rates(intel_dp); > > > > /* Read the eDP DSC DPCD registers */ > > -- > > 2.24.0 > > > > _______________________________________________ > > dri-devel mailing list > > dri-devel@lists.freedesktop.org > > https://lists.freedesktop.org/mailman/listinfo/dri-devel > > -- > Ville Syrjälä > Intel
On Thu, Jan 9, 2020 at 10:08 AM Mario Kleiner <mario.kleiner.de@gmail.com> wrote: > > If the current eDP link rate, as read from hw, provides a > higher bandwidth than the standard link rates, then add the > current link rate to the link_rates array for consideration > in future mode-sets. > > These initial current eDP link settings have been set up by > firmware during boot, so they should work on the eDP panel. > Therefore use them if the firmware thinks they are good and > they provide higher link bandwidth, e.g., to enable higher > resolutions / color depths. > > This fixes a problem found on the MacBookPro 2017 Retina panel: > > The panel reports 10 bpc color depth in its EDID, and the UEFI > firmware chooses link settings at boot which support enough > bandwidth for 10 bpc (324000 kbit/sec to be precise), but the > DP_MAX_LINK_RATE dpcd register only reports 2.7 Gbps as possible, > so intel_dp_set_sink_rates() would cap at that. This restricts > achievable color depth to 8 bpc, not providing the full color > depth of the panel. With this commit, we can use firmware setting > and get the full 10 bpc advertised by the Retina panel. Would it make more sense to just add a quirk for this particular panel? Would there be cases where the link was programmed wrong and then we end up using that additional link speed as supported? Alex > > Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com> > Cc: Daniel Vetter <daniel.vetter@ffwll.ch> > --- > drivers/gpu/drm/i915/display/intel_dp.c | 23 +++++++++++++++++++++++ > 1 file changed, 23 insertions(+) > > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c > index 2f31d226c6eb..aa3e0b5108c6 100644 > --- a/drivers/gpu/drm/i915/display/intel_dp.c > +++ b/drivers/gpu/drm/i915/display/intel_dp.c > @@ -4368,6 +4368,8 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp) > { > struct drm_i915_private *dev_priv = > to_i915(dp_to_dig_port(intel_dp)->base.base.dev); > + int max_rate; > + u8 link_bw; > > /* this function is meant to be called only once */ > WARN_ON(intel_dp->dpcd[DP_DPCD_REV] != 0); > @@ -4433,6 +4435,27 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp) > else > intel_dp_set_sink_rates(intel_dp); > > + /* > + * If the firmware programmed a rate higher than the standard sink rates > + * during boot, then add that rate as a valid sink rate, as fw knows > + * this is a good rate and we get extra bandwidth. > + * > + * Helps, e.g., on the Apple MacBookPro 2017 Retina panel, which is only > + * eDP 1.1, but supports the unusual rate of 324000 kHz at bootup, for > + * 10 bpc / 30 bit color depth. > + */ > + if (!intel_dp->use_rate_select && > + (drm_dp_dpcd_read(&intel_dp->aux, DP_LINK_BW_SET, &link_bw, 1) == 1) && > + (link_bw > 0) && (intel_dp->num_sink_rates < DP_MAX_SUPPORTED_RATES)) { > + max_rate = drm_dp_bw_code_to_link_rate(link_bw); > + if (max_rate > intel_dp->sink_rates[intel_dp->num_sink_rates - 1]) { > + intel_dp->sink_rates[intel_dp->num_sink_rates] = max_rate; > + intel_dp->num_sink_rates++; > + DRM_DEBUG_KMS("Adding max bandwidth eDP rate %d kHz.\n", > + max_rate); > + } > + } > + > intel_dp_set_common_rates(intel_dp); > > /* Read the eDP DSC DPCD registers */ > -- > 2.24.0 > > _______________________________________________ > dri-devel mailing list > dri-devel@lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/dri-devel
On Thu, Jan 9, 2020 at 4:27 PM Ville Syrjälä <ville.syrjala@linux.intel.com> wrote: > On Thu, Jan 09, 2020 at 04:07:52PM +0100, Mario Kleiner wrote: > > If the current eDP link rate, as read from hw, provides a > > higher bandwidth than the standard link rates, then add the > > current link rate to the link_rates array for consideration > > in future mode-sets. > > > > These initial current eDP link settings have been set up by > > firmware during boot, so they should work on the eDP panel. > > Therefore use them if the firmware thinks they are good and > > they provide higher link bandwidth, e.g., to enable higher > > resolutions / color depths. > > > > This fixes a problem found on the MacBookPro 2017 Retina panel: > > > > The panel reports 10 bpc color depth in its EDID, and the UEFI > > firmware chooses link settings at boot which support enough > > bandwidth for 10 bpc (324000 kbit/sec to be precise), but the > > DP_MAX_LINK_RATE dpcd register only reports 2.7 Gbps as possible, > > so intel_dp_set_sink_rates() would cap at that. This restricts > > achievable color depth to 8 bpc, not providing the full color > > depth of the panel. With this commit, we can use firmware setting > > and get the full 10 bpc advertised by the Retina panel. > > > > Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com> > > Cc: Daniel Vetter <daniel.vetter@ffwll.ch> > > --- > > drivers/gpu/drm/i915/display/intel_dp.c | 23 +++++++++++++++++++++++ > > 1 file changed, 23 insertions(+) > > > > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c > b/drivers/gpu/drm/i915/display/intel_dp.c > > index 2f31d226c6eb..aa3e0b5108c6 100644 > > --- a/drivers/gpu/drm/i915/display/intel_dp.c > > +++ b/drivers/gpu/drm/i915/display/intel_dp.c > > @@ -4368,6 +4368,8 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp) > > { > > struct drm_i915_private *dev_priv = > > to_i915(dp_to_dig_port(intel_dp)->base.base.dev); > > + int max_rate; > > + u8 link_bw; > > > > /* this function is meant to be called only once */ > > WARN_ON(intel_dp->dpcd[DP_DPCD_REV] != 0); > > @@ -4433,6 +4435,27 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp) > > else > > intel_dp_set_sink_rates(intel_dp); > > > > + /* > > + * If the firmware programmed a rate higher than the standard sink > rates > > + * during boot, then add that rate as a valid sink rate, as fw > knows > > + * this is a good rate and we get extra bandwidth. > > + * > > + * Helps, e.g., on the Apple MacBookPro 2017 Retina panel, which > is only > > + * eDP 1.1, but supports the unusual rate of 324000 kHz at bootup, > for > > + * 10 bpc / 30 bit color depth. > > + */ > > + if (!intel_dp->use_rate_select && > > + (drm_dp_dpcd_read(&intel_dp->aux, DP_LINK_BW_SET, &link_bw, 1) > == 1) && > > + (link_bw > 0) && (intel_dp->num_sink_rates < > DP_MAX_SUPPORTED_RATES)) { > > + max_rate = drm_dp_bw_code_to_link_rate(link_bw); > > + if (max_rate > > intel_dp->sink_rates[intel_dp->num_sink_rates - 1]) { > > + intel_dp->sink_rates[intel_dp->num_sink_rates] = > max_rate; > > + intel_dp->num_sink_rates++; > > + DRM_DEBUG_KMS("Adding max bandwidth eDP rate %d > kHz.\n", > > + max_rate); > > + } > > Hmm. I guess we could do this. But plese put it into a separate > function so we don't end up with that super ugly if condition. > > Ok. Does static void intel_edp_add_bootup_rate() good to you? Or intel_edp_add_fw_rate()? The debug message should probably be a bit more explicit. Eg. > something like: > "Firmware using non-standard link rate %d kHz. Including it in sink > rates.\n" > Ok. 
> I'm also wondering if we shouldn't just add the link rate to the sink > rates regardless of whether it's the highest rate or not... > > I tried to be conservative and simple, but yes, one could add it anyway. Would need to preserve the order in the sink_rates[] array. Your choice, you're the expert :) > > + } > > + > > intel_dp_set_common_rates(intel_dp); > > > > /* Read the eDP DSC DPCD registers */ > > -- > > 2.24.0 > > > > _______________________________________________ > > dri-devel mailing list > > dri-devel@lists.freedesktop.org > > https://lists.freedesktop.org/mailman/listinfo/dri-devel > > -- > Ville Syrjälä > Intel >
On Thu, Jan 9, 2020 at 4:38 PM Ville Syrjälä <ville.syrjala@linux.intel.com> wrote: > On Thu, Jan 09, 2020 at 05:26:57PM +0200, Ville Syrjälä wrote: > > On Thu, Jan 09, 2020 at 04:07:52PM +0100, Mario Kleiner wrote: > > > The panel reports 10 bpc color depth in its EDID, and the UEFI > > > firmware chooses link settings at boot which support enough > > > bandwidth for 10 bpc (324000 kbit/sec to be precise), but the > > > DP_MAX_LINK_RATE dpcd register only reports 2.7 Gbps as possible, > > Does it actually or do we just ignore the fact that it reports 3.24Gbps? > > If it really reports 3.24 then we should be able to just add that to > dp_rates[] in intel_dp_set_sink_rates() and be done with it. > > Although we'd likely want to skip 3.24 unless it really is reported > as the max so as to not use that non-standard rate on other displays. > So would require a bit fancier logic for that. > > Was also my initial thought, but the DP_MAX_LINK_RATE reg reports 2.7 Gbps as maximum. -mario
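To keep the numbers straight: DP link-bandwidth codes encode the per-lane rate in units of 0.27 Gbps, which is exactly the arithmetic the kernel's drm_dp_bw_code_to_link_rate() performs. A tiny standalone sketch (the local helper name is just for the example):

```c
#include <stdio.h>

/*
 * Same arithmetic as the kernel's drm_dp_bw_code_to_link_rate(): the DP
 * link-bandwidth code is the per-lane rate in units of 0.27 Gbps, i.e.
 * rate_khz = code * 27000.  The helper name is local to this example.
 */
static int bw_code_to_link_rate_khz(unsigned int code)
{
	return code * 27000;
}

int main(void)
{
	/* 0x0a is what this panel reports in DP_MAX_LINK_RATE. */
	printf("0x0a -> %d kHz\n", bw_code_to_link_rate_khz(0x0a)); /* 270000 = 2.7 Gbps  */
	/* 0x0c is what the firmware programs into DP_LINK_BW_SET. */
	printf("0x0c -> %d kHz\n", bw_code_to_link_rate_khz(0x0c)); /* 324000 = 3.24 Gbps */
	return 0;
}
```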
On Thu, Jan 09, 2020 at 05:38:15PM +0200, Ville Syrjälä wrote: > On Thu, Jan 09, 2020 at 05:26:57PM +0200, Ville Syrjälä wrote: > > On Thu, Jan 09, 2020 at 04:07:52PM +0100, Mario Kleiner wrote: > > > If the current eDP link rate, as read from hw, provides a > > > higher bandwidth than the standard link rates, then add the > > > current link rate to the link_rates array for consideration > > > in future mode-sets. > > > > > > These initial current eDP link settings have been set up by > > > firmware during boot, so they should work on the eDP panel. > > > Therefore use them if the firmware thinks they are good and > > > they provide higher link bandwidth, e.g., to enable higher > > > resolutions / color depths. > > > > > > This fixes a problem found on the MacBookPro 2017 Retina panel: > > > > > > The panel reports 10 bpc color depth in its EDID, and the UEFI > > > firmware chooses link settings at boot which support enough > > > bandwidth for 10 bpc (324000 kbit/sec to be precise), but the > > > DP_MAX_LINK_RATE dpcd register only reports 2.7 Gbps as possible, > > Does it actually or do we just ignore the fact that it reports 3.24Gbps? > > If it really reports 3.24 then we should be able to just add that to > dp_rates[] in intel_dp_set_sink_rates() and be done with it. > > Although we'd likely want to skip 3.24 unless it really is reported > as the max so as to not use that non-standard rate on other displays. > So would require a bit fancier logic for that. Maybe just something like this? --- a/drivers/gpu/drm/i915/display/intel_dp.c +++ b/drivers/gpu/drm/i915/display/intel_dp.c @@ -177,6 +177,12 @@ static void intel_dp_set_sink_rates(struct intel_dp *intel_dp) intel_dp->sink_rates[i] = dp_rates[i]; } + /* blah */ + if (max_rate > intel_dp->sink_rates[i - 1]) { + DRM_DEBUG_KMS(...); + intel_dp->sink_rates[i++] = max_rate; + } + intel_dp->num_sink_rates = i; > > > > so intel_dp_set_sink_rates() would cap at that. This restricts > > > achievable color depth to 8 bpc, not providing the full color > > > depth of the panel. With this commit, we can use firmware setting > > > and get the full 10 bpc advertised by the Retina panel. > > > > > > Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com> > > > Cc: Daniel Vetter <daniel.vetter@ffwll.ch> > > > --- > > > drivers/gpu/drm/i915/display/intel_dp.c | 23 +++++++++++++++++++++++ > > > 1 file changed, 23 insertions(+) > > > > > > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c > > > index 2f31d226c6eb..aa3e0b5108c6 100644 > > > --- a/drivers/gpu/drm/i915/display/intel_dp.c > > > +++ b/drivers/gpu/drm/i915/display/intel_dp.c > > > @@ -4368,6 +4368,8 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp) > > > { > > > struct drm_i915_private *dev_priv = > > > to_i915(dp_to_dig_port(intel_dp)->base.base.dev); > > > + int max_rate; > > > + u8 link_bw; > > > > > > /* this function is meant to be called only once */ > > > WARN_ON(intel_dp->dpcd[DP_DPCD_REV] != 0); > > > @@ -4433,6 +4435,27 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp) > > > else > > > intel_dp_set_sink_rates(intel_dp); > > > > > > + /* > > > + * If the firmware programmed a rate higher than the standard sink rates > > > + * during boot, then add that rate as a valid sink rate, as fw knows > > > + * this is a good rate and we get extra bandwidth. 
> > > + * > > > + * Helps, e.g., on the Apple MacBookPro 2017 Retina panel, which is only > > > + * eDP 1.1, but supports the unusual rate of 324000 kHz at bootup, for > > > + * 10 bpc / 30 bit color depth. > > > + */ > > > + if (!intel_dp->use_rate_select && > > > + (drm_dp_dpcd_read(&intel_dp->aux, DP_LINK_BW_SET, &link_bw, 1) == 1) && > > > + (link_bw > 0) && (intel_dp->num_sink_rates < DP_MAX_SUPPORTED_RATES)) { > > > + max_rate = drm_dp_bw_code_to_link_rate(link_bw); > > > + if (max_rate > intel_dp->sink_rates[intel_dp->num_sink_rates - 1]) { > > > + intel_dp->sink_rates[intel_dp->num_sink_rates] = max_rate; > > > + intel_dp->num_sink_rates++; > > > + DRM_DEBUG_KMS("Adding max bandwidth eDP rate %d kHz.\n", > > > + max_rate); > > > + } > > > > Hmm. I guess we could do this. But plese put it into a separate > > function so we don't end up with that super ugly if condition. > > > > The debug message should probably be a bit more explicit. Eg. > > something like: > > "Firmware using non-standard link rate %d kHz. Including it in sink rates.\n" > > > > I'm also wondering if we shouldn't just add the link rate to the sink > > rates regradless of whether it's the highest rate or not... > > > > > + } > > > + > > > intel_dp_set_common_rates(intel_dp); > > > > > > /* Read the eDP DSC DPCD registers */ > > > -- > > > 2.24.0 > > > > > > _______________________________________________ > > > dri-devel mailing list > > > dri-devel@lists.freedesktop.org > > > https://lists.freedesktop.org/mailman/listinfo/dri-devel > > > > -- > > Ville Syrjälä > > Intel > > -- > Ville Syrjälä > Intel > _______________________________________________ > dri-devel mailing list > dri-devel@lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/dri-devel
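Filling in the "blah" placeholder from Ville's snippet, a sketch of how that could read, assuming the surrounding loop matches the v5.5-era intel_dp_set_sink_rates() shown in the diff context above (a sketch, not a tested patch):

```c
static void intel_dp_set_sink_rates(struct intel_dp *intel_dp)
{
	static const int dp_rates[] = {
		162000, 270000, 540000, 810000
	};
	int i, max_rate;

	max_rate = drm_dp_bw_code_to_link_rate(intel_dp->dpcd[DP_MAX_LINK_RATE]);

	/* Copy the standard rates up to and including the advertised max. */
	for (i = 0; i < ARRAY_SIZE(dp_rates); i++) {
		if (dp_rates[i] > max_rate)
			break;
		intel_dp->sink_rates[i] = dp_rates[i];
	}

	/*
	 * Ville's placeholder above: if DP_MAX_LINK_RATE advertises a
	 * non-standard rate above the last standard rate we kept, keep it
	 * as an extra sink rate instead of rounding it down.
	 */
	if (i > 0 && max_rate > intel_dp->sink_rates[i - 1]) {
		DRM_DEBUG_KMS("Sink advertises non-standard max link rate %d kHz, adding to sink rates\n",
			      max_rate);
		intel_dp->sink_rates[i++] = max_rate;
	}

	intel_dp->num_sink_rates = i;
}
```

As Mario notes further down, this particular panel would not benefit from it, since its DP_MAX_LINK_RATE only reports 0x0a; it would only help sinks that actually advertise a non-standard maximum.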
On Thu, Jan 9, 2020 at 4:40 PM Alex Deucher <alexdeucher@gmail.com> wrote: > On Thu, Jan 9, 2020 at 10:08 AM Mario Kleiner > <mario.kleiner.de@gmail.com> wrote: > > > > If the current eDP link rate, as read from hw, provides a > > higher bandwidth than the standard link rates, then add the > > current link rate to the link_rates array for consideration > > in future mode-sets. > > > > These initial current eDP link settings have been set up by > > firmware during boot, so they should work on the eDP panel. > > Therefore use them if the firmware thinks they are good and > > they provide higher link bandwidth, e.g., to enable higher > > resolutions / color depths. > > > > This fixes a problem found on the MacBookPro 2017 Retina panel: > > > > The panel reports 10 bpc color depth in its EDID, and the UEFI > > firmware chooses link settings at boot which support enough > > bandwidth for 10 bpc (324000 kbit/sec to be precise), but the > > DP_MAX_LINK_RATE dpcd register only reports 2.7 Gbps as possible, > > so intel_dp_set_sink_rates() would cap at that. This restricts > > achievable color depth to 8 bpc, not providing the full color > > depth of the panel. With this commit, we can use firmware setting > > and get the full 10 bpc advertised by the Retina panel. > > Would it make more sense to just add a quirk for this particular > panel? Would there be cases where the link was programmed wrong and > then we end up using that additional link speed as supported? > > Alex > > Not sure. This MBP 2017 is the only non-ancient laptop i now have. I'd assume many other Apple Retina panels would behave similar. The panels dpcd regs report DP 1.1 and eDP 1.3, so the flexible table with additional modes from eDP1.4+ does not exist. According to Wikipedia, eDP 1.4 was introduced in february 2013 and this is a mid 2017 machine, so Apple seems to be quite behind. Therefore i assume we'd need a lot of quirks over time. That said: 1. The logic in amdgpu's DC for the same purpose is a bit different than on the intel side. 2. DC allows overriding DP link settings, that's how i initially tested this, so one could do the "quirk" via something like that in a bootup script. So on AMD one could work around the lack of the patch and of quirks. 3. I spent a lot of time with a photo-meter, testing the quality of the 10 bit: It turns out that running the panel at 8 bit + AMD's spatial dithering that kicks in gives better results than running the panel in native 10 bit. Maybe the panel is not really a 10 bit one, but just pretends to be and then uses its own dithering to achieve 10 bit. So at least on AMD one is better off precision-wise with the 8 bit panel default with this specific panel. On Intel however, we don't do dithering for > 6 bpc panels atm., so using the panel at 10 bpc is the only way to get 10 bit display atm. Adn we don't use dithering on Intel at > 6 bpc panels atm., because there are some oddities in the way Intel hw dithers at higher bit depths - it also dithers pixel values where it shouldn't. That makes it impossible to get an identity passthrough of a 8 bpc framebuffer to the outputs, which kills all kind of special display equipment that needs that identity passthrough to work. 
-mario > > > Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com> > > Cc: Daniel Vetter <daniel.vetter@ffwll.ch> > > --- > > drivers/gpu/drm/i915/display/intel_dp.c | 23 +++++++++++++++++++++++ > > 1 file changed, 23 insertions(+) > > > > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c > b/drivers/gpu/drm/i915/display/intel_dp.c > > index 2f31d226c6eb..aa3e0b5108c6 100644 > > --- a/drivers/gpu/drm/i915/display/intel_dp.c > > +++ b/drivers/gpu/drm/i915/display/intel_dp.c > > @@ -4368,6 +4368,8 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp) > > { > > struct drm_i915_private *dev_priv = > > to_i915(dp_to_dig_port(intel_dp)->base.base.dev); > > + int max_rate; > > + u8 link_bw; > > > > /* this function is meant to be called only once */ > > WARN_ON(intel_dp->dpcd[DP_DPCD_REV] != 0); > > @@ -4433,6 +4435,27 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp) > > else > > intel_dp_set_sink_rates(intel_dp); > > > > + /* > > + * If the firmware programmed a rate higher than the standard > sink rates > > + * during boot, then add that rate as a valid sink rate, as fw > knows > > + * this is a good rate and we get extra bandwidth. > > + * > > + * Helps, e.g., on the Apple MacBookPro 2017 Retina panel, which > is only > > + * eDP 1.1, but supports the unusual rate of 324000 kHz at > bootup, for > > + * 10 bpc / 30 bit color depth. > > + */ > > + if (!intel_dp->use_rate_select && > > + (drm_dp_dpcd_read(&intel_dp->aux, DP_LINK_BW_SET, &link_bw, > 1) == 1) && > > + (link_bw > 0) && (intel_dp->num_sink_rates < > DP_MAX_SUPPORTED_RATES)) { > > + max_rate = drm_dp_bw_code_to_link_rate(link_bw); > > + if (max_rate > > intel_dp->sink_rates[intel_dp->num_sink_rates - 1]) { > > + intel_dp->sink_rates[intel_dp->num_sink_rates] = > max_rate; > > + intel_dp->num_sink_rates++; > > + DRM_DEBUG_KMS("Adding max bandwidth eDP rate %d > kHz.\n", > > + max_rate); > > + } > > + } > > + > > intel_dp_set_common_rates(intel_dp); > > > > /* Read the eDP DSC DPCD registers */ > > -- > > 2.24.0 > > > > _______________________________________________ > > dri-devel mailing list > > dri-devel@lists.freedesktop.org > > https://lists.freedesktop.org/mailman/listinfo/dri-devel >
On Thu, Jan 09, 2020 at 05:30:05PM +0100, Mario Kleiner wrote: > On Thu, Jan 9, 2020 at 4:38 PM Ville Syrjälä <ville.syrjala@linux.intel.com> > wrote: > > > On Thu, Jan 09, 2020 at 05:26:57PM +0200, Ville Syrjälä wrote: > > > On Thu, Jan 09, 2020 at 04:07:52PM +0100, Mario Kleiner wrote: > > > > The panel reports 10 bpc color depth in its EDID, and the UEFI > > > > firmware chooses link settings at boot which support enough > > > > bandwidth for 10 bpc (324000 kbit/sec to be precise), but the > > > > DP_MAX_LINK_RATE dpcd register only reports 2.7 Gbps as possible, > > > > Does it actually or do we just ignore the fact that it reports 3.24Gbps? > > > > If it really reports 3.24 then we should be able to just add that to > > dp_rates[] in intel_dp_set_sink_rates() and be done with it. > > > > Although we'd likely want to skip 3.24 unless it really is reported > > as the max so as to not use that non-standard rate on other displays. > > So would require a bit fancier logic for that. > > > > > Was also my initial thought, but the DP_MAX_LINK_RATE reg reports 2.7 Gbps > as maximum. So dpcd[0x1] == 0xa ? What about the magic second version of DP_MAX_LINK_RATE at 0x2201 ? Hmm. I guess we should already be reading that via intel_dp_extended_receiver_capabilities().
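For readers unfamiliar with the "magic second version": DPCD 0x0e (DP_TRAINING_AUX_RD_INTERVAL) carries a presence bit for an extended receiver capability field at 0x2200+, and 0x2201 mirrors DP_MAX_LINK_RATE there. A rough sketch of probing it follows; the helper name is made up, and this is not the actual intel_dp_extended_receiver_capabilities() code:

```c
/*
 * Sketch only: check DP_EXTENDED_RECEIVER_CAP_FIELD_PRESENT and, if set,
 * read the "second" DP_MAX_LINK_RATE at 0x2201 instead of 0x001.
 * Not the actual intel_dp_extended_receiver_capabilities() implementation.
 */
static int read_effective_max_link_rate(struct drm_dp_aux *aux, u8 *max_link_rate)
{
	u8 rd_interval;

	if (drm_dp_dpcd_readb(aux, DP_TRAINING_AUX_RD_INTERVAL, &rd_interval) != 1)
		return -EIO;

	/* No extended receiver cap field: DPCD 0x001 is authoritative. */
	if (!(rd_interval & DP_EXTENDED_RECEIVER_CAP_FIELD_PRESENT))
		return drm_dp_dpcd_readb(aux, DP_MAX_LINK_RATE, max_link_rate) == 1 ? 0 : -EIO;

	return drm_dp_dpcd_readb(aux, DP_DP13_MAX_LINK_RATE, max_link_rate) == 1 ? 0 : -EIO;
}
```

On this panel the presence bit turns out not to be set (as Mario confirms below), so the extended field never comes into play here.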
On Thu, Jan 9, 2020 at 5:47 PM Ville Syrjälä <ville.syrjala@linux.intel.com> wrote: > On Thu, Jan 09, 2020 at 05:30:05PM +0100, Mario Kleiner wrote: > > On Thu, Jan 9, 2020 at 4:38 PM Ville Syrjälä < > ville.syrjala@linux.intel.com> > > wrote: > > > > > On Thu, Jan 09, 2020 at 05:26:57PM +0200, Ville Syrjälä wrote: > > > > On Thu, Jan 09, 2020 at 04:07:52PM +0100, Mario Kleiner wrote: > > > > > The panel reports 10 bpc color depth in its EDID, and the UEFI > > > > > firmware chooses link settings at boot which support enough > > > > > bandwidth for 10 bpc (324000 kbit/sec to be precise), but the > > > > > DP_MAX_LINK_RATE dpcd register only reports 2.7 Gbps as possible, > > > > > > Does it actually or do we just ignore the fact that it reports > 3.24Gbps? > > > > > > If it really reports 3.24 then we should be able to just add that to > > > dp_rates[] in intel_dp_set_sink_rates() and be done with it. > > > > > > Although we'd likely want to skip 3.24 unless it really is reported > > > as the max so as to not use that non-standard rate on other displays. > > > So would require a bit fancier logic for that. > > > > > > > > Was also my initial thought, but the DP_MAX_LINK_RATE reg reports 2.7 > Gbps > > as maximum. > > So dpcd[0x1] == 0xa ? > > Yes. [*] > What about the magic second version of DP_MAX_LINK_RATE at 0x2201 ? > Hmm. I guess we should already be reading that via > intel_dp_extended_receiver_capabilities(). > Yes, you do. [*] Well, i have to recheck on the machine. I started this work on the AMD side and checked what AMD DC gave me, haven't rechecked stuff under i915 that i already knew from AMD. Comparing the implementations, there's some peculiar differences that may matter: intel_dp_extended_receiver_capabilities() is more "paranoid" than AMD DC's retrieve_link_cap() function in deciding if the extended receiver caps are valid. Intels implementation copies only the first 6 Bytes of extended receiver caps into the dpcd[] arrays, whereas AMD copies 16 Bytes. Not sure about the differences, but one of you may wanna check why this is, and if it matters somehow. Btw. your proposed /* blah */ if (max_rate > ...) wouldn't work if dpcd[0x1] == 0xa, which it likely is [*]. AMD DC identified it as DP 1.1, eDP 1.3, and these extended caps seem to be only part of DP 1.3+ if i understand the comments in intel_dp_extended_receiver_capabilities() correctly. -mario > > -- > Ville Syrjälä > Intel >
On Thu, Jan 09, 2020 at 06:57:14PM +0100, Mario Kleiner wrote: > On Thu, Jan 9, 2020 at 5:47 PM Ville Syrjälä <ville.syrjala@linux.intel.com> > wrote: > > > On Thu, Jan 09, 2020 at 05:30:05PM +0100, Mario Kleiner wrote: > > > On Thu, Jan 9, 2020 at 4:38 PM Ville Syrjälä < > > ville.syrjala@linux.intel.com> > > > wrote: > > > > > > > On Thu, Jan 09, 2020 at 05:26:57PM +0200, Ville Syrjälä wrote: > > > > > On Thu, Jan 09, 2020 at 04:07:52PM +0100, Mario Kleiner wrote: > > > > > > The panel reports 10 bpc color depth in its EDID, and the UEFI > > > > > > firmware chooses link settings at boot which support enough > > > > > > bandwidth for 10 bpc (324000 kbit/sec to be precise), but the > > > > > > DP_MAX_LINK_RATE dpcd register only reports 2.7 Gbps as possible, > > > > > > > > Does it actually or do we just ignore the fact that it reports > > 3.24Gbps? > > > > > > > > If it really reports 3.24 then we should be able to just add that to > > > > dp_rates[] in intel_dp_set_sink_rates() and be done with it. > > > > > > > > Although we'd likely want to skip 3.24 unless it really is reported > > > > as the max so as to not use that non-standard rate on other displays. > > > > So would require a bit fancier logic for that. > > > > > > > > > > > Was also my initial thought, but the DP_MAX_LINK_RATE reg reports 2.7 > > Gbps > > > as maximum. > > > > So dpcd[0x1] == 0xa ? > > > > > Yes. [*] > > > > What about the magic second version of DP_MAX_LINK_RATE at 0x2201 ? > > Hmm. I guess we should already be reading that via > > intel_dp_extended_receiver_capabilities(). > > > > Yes, you do. > > [*] Well, i have to recheck on the machine. I started this work on the AMD > side and checked what AMD DC gave me, haven't rechecked stuff under i915 > that i already knew from AMD. Comparing the implementations, there's some > peculiar differences that may matter: > > intel_dp_extended_receiver_capabilities() is more "paranoid" than AMD DC's > retrieve_link_cap() function in deciding if the extended receiver caps are > valid. Intels implementation copies only the first 6 Bytes of extended > receiver caps into the dpcd[] arrays, whereas AMD copies 16 Bytes. Not sure > about the differences, but one of you may wanna check why this is, and if > it matters somehow. > > Btw. your proposed > > /* blah */ > if (max_rate > ...) > > wouldn't work if dpcd[0x1] == 0xa, which it likely is [*]. AMD DC > identified it as DP 1.1, eDP 1.3, and these extended caps seem to be only > part of DP 1.3+ if i understand the comments in > intel_dp_extended_receiver_capabilities() correctly. Yeah, but you never know how creative they've been with the DPCD in such a propritary machine. A full DPCD dump from /dev/drm_dp_aux* would be nice. Can you file a bug an attach the DPCD dump there so we have a good reference on what we're talking about (also for future if/when someone eventually starts to wonder why we have such hacks in the code)?
On Thu, Jan 9, 2020 at 11:47 AM Mario Kleiner <mario.kleiner.de@gmail.com> wrote: > > On Thu, Jan 9, 2020 at 4:40 PM Alex Deucher <alexdeucher@gmail.com> wrote: >> >> On Thu, Jan 9, 2020 at 10:08 AM Mario Kleiner >> <mario.kleiner.de@gmail.com> wrote: >> > >> > If the current eDP link rate, as read from hw, provides a >> > higher bandwidth than the standard link rates, then add the >> > current link rate to the link_rates array for consideration >> > in future mode-sets. >> > >> > These initial current eDP link settings have been set up by >> > firmware during boot, so they should work on the eDP panel. >> > Therefore use them if the firmware thinks they are good and >> > they provide higher link bandwidth, e.g., to enable higher >> > resolutions / color depths. >> > >> > This fixes a problem found on the MacBookPro 2017 Retina panel: >> > >> > The panel reports 10 bpc color depth in its EDID, and the UEFI >> > firmware chooses link settings at boot which support enough >> > bandwidth for 10 bpc (324000 kbit/sec to be precise), but the >> > DP_MAX_LINK_RATE dpcd register only reports 2.7 Gbps as possible, >> > so intel_dp_set_sink_rates() would cap at that. This restricts >> > achievable color depth to 8 bpc, not providing the full color >> > depth of the panel. With this commit, we can use firmware setting >> > and get the full 10 bpc advertised by the Retina panel. >> >> Would it make more sense to just add a quirk for this particular >> panel? Would there be cases where the link was programmed wrong and >> then we end up using that additional link speed as supported? >> >> Alex >> > > Not sure. This MBP 2017 is the only non-ancient laptop i now have. I'd assume many other Apple Retina panels would behave similar. The panels dpcd regs report DP 1.1 and eDP 1.3, so the flexible table with additional modes from eDP1.4+ does not exist. According to Wikipedia, eDP 1.4 was introduced in february 2013 and this is a mid 2017 machine, so Apple seems to be quite behind. Therefore i assume we'd need a lot of quirks over time. > > That said: > > 1. The logic in amdgpu's DC for the same purpose is a bit different than on the intel side. > > 2. DC allows overriding DP link settings, that's how i initially tested this, so one could do the "quirk" via something like that in a bootup script. So on AMD one could work around the lack of the patch and of quirks. > > 3. I spent a lot of time with a photo-meter, testing the quality of the 10 bit: It turns out that running the panel at 8 bit + AMD's spatial dithering that kicks in gives better results than running the panel in native 10 bit. Maybe the panel is not really a 10 bit one, but just pretends to be and then uses its own dithering to achieve 10 bit. So at least on AMD one is better off precision-wise with the 8 bit panel default with this specific panel. > > On Intel however, we don't do dithering for > 6 bpc panels atm., so using the panel at 10 bpc is the only way to get 10 bit display atm. Adn we don't use dithering on Intel at > 6 bpc panels atm., because there are some oddities in the way Intel hw dithers at higher bit depths - it also dithers pixel values where it shouldn't. That makes it impossible to get an identity passthrough of a 8 bpc framebuffer to the outputs, which kills all kind of special display equipment that needs that identity passthrough to work. > As Harry mentioned in the other thread, won't this only work if the display was brought up by the vbios? In the suspend/resume case, won't we just fall back to 2.7Gbps? 
Alex > -mario > >> > >> > Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com> >> > Cc: Daniel Vetter <daniel.vetter@ffwll.ch> >> > --- >> > drivers/gpu/drm/i915/display/intel_dp.c | 23 +++++++++++++++++++++++ >> > 1 file changed, 23 insertions(+) >> > >> > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c >> > index 2f31d226c6eb..aa3e0b5108c6 100644 >> > --- a/drivers/gpu/drm/i915/display/intel_dp.c >> > +++ b/drivers/gpu/drm/i915/display/intel_dp.c >> > @@ -4368,6 +4368,8 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp) >> > { >> > struct drm_i915_private *dev_priv = >> > to_i915(dp_to_dig_port(intel_dp)->base.base.dev); >> > + int max_rate; >> > + u8 link_bw; >> > >> > /* this function is meant to be called only once */ >> > WARN_ON(intel_dp->dpcd[DP_DPCD_REV] != 0); >> > @@ -4433,6 +4435,27 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp) >> > else >> > intel_dp_set_sink_rates(intel_dp); >> > >> > + /* >> > + * If the firmware programmed a rate higher than the standard sink rates >> > + * during boot, then add that rate as a valid sink rate, as fw knows >> > + * this is a good rate and we get extra bandwidth. >> > + * >> > + * Helps, e.g., on the Apple MacBookPro 2017 Retina panel, which is only >> > + * eDP 1.1, but supports the unusual rate of 324000 kHz at bootup, for >> > + * 10 bpc / 30 bit color depth. >> > + */ >> > + if (!intel_dp->use_rate_select && >> > + (drm_dp_dpcd_read(&intel_dp->aux, DP_LINK_BW_SET, &link_bw, 1) == 1) && >> > + (link_bw > 0) && (intel_dp->num_sink_rates < DP_MAX_SUPPORTED_RATES)) { >> > + max_rate = drm_dp_bw_code_to_link_rate(link_bw); >> > + if (max_rate > intel_dp->sink_rates[intel_dp->num_sink_rates - 1]) { >> > + intel_dp->sink_rates[intel_dp->num_sink_rates] = max_rate; >> > + intel_dp->num_sink_rates++; >> > + DRM_DEBUG_KMS("Adding max bandwidth eDP rate %d kHz.\n", >> > + max_rate); >> > + } >> > + } >> > + >> > intel_dp_set_common_rates(intel_dp); >> > >> > /* Read the eDP DSC DPCD registers */ >> > -- >> > 2.24.0 >> > >> > _______________________________________________ >> > dri-devel mailing list >> > dri-devel@lists.freedesktop.org >> > https://lists.freedesktop.org/mailman/listinfo/dri-devel
On Thu, Jan 9, 2020 at 7:24 PM Ville Syrjälä <ville.syrjala@linux.intel.com> wrote: > On Thu, Jan 09, 2020 at 06:57:14PM +0100, Mario Kleiner wrote: > > On Thu, Jan 9, 2020 at 5:47 PM Ville Syrjälä < > ville.syrjala@linux.intel.com> > > wrote: > > > > > On Thu, Jan 09, 2020 at 05:30:05PM +0100, Mario Kleiner wrote: > > > > On Thu, Jan 9, 2020 at 4:38 PM Ville Syrjälä < > > > ville.syrjala@linux.intel.com> > > > > wrote: > > > > > > wouldn't work if dpcd[0x1] == 0xa, which it likely is [*]. AMD DC > > identified it as DP 1.1, eDP 1.3, and these extended caps seem to be only > > part of DP 1.3+ if I understand the comments in > > intel_dp_extended_receiver_capabilities() correctly. > > Ok, looking at previous debug output logs shows that those extended caps are not present on the system, i.e. the extended caps bit is not set. So dpcd[0x1] == 0xa. > Yeah, but you never know how creative they've been with the DPCD in > such a proprietary machine. A full DPCD dump from /dev/drm_dp_aux* would > be nice. Can you file a bug and attach the DPCD dump there so we have a > good reference on what we're talking about (also for future if/when > someone eventually starts to wonder why we have such hacks in the > code)? > > True, it's Apple which likes to "Think different..." :/ Will do. But is there a proper/better way to do the /dev/drm_dp_aux0 dump? I used cat /dev/drm_dp_aux0 > dump, and that hangs, but if I interrupt it after a few seconds, I get a dump file of 512k size, which seems excessive? On AMD DC atm., in case that matters. However, the file shows DPCD_REV 1.1, maximum 0xa and no extended caps (DP_TRAINING_AUX_RD_INTERVAL aka [0xe] == 0x00). -mario
On Thu, Jan 9, 2020 at 8:49 PM Alex Deucher <alexdeucher@gmail.com> wrote: > On Thu, Jan 9, 2020 at 11:47 AM Mario Kleiner > <mario.kleiner.de@gmail.com> wrote: > > > > On Thu, Jan 9, 2020 at 4:40 PM Alex Deucher <alexdeucher@gmail.com> > wrote: > >> > >> On Thu, Jan 9, 2020 at 10:08 AM Mario Kleiner > >> <mario.kleiner.de@gmail.com> wrote: > >> > > As Harry mentioned in the other thread, won't this only work if the > display was brought up by the vbios? In the suspend/resume case, > won't we just fall back to 2.7Gbps? > > Alex > > Adding Harry to cc... The code is only executed for eDP. On the Intel side, it seems that intel_edp_init_dpcd() only gets called during driver load / modesetting init, so not on resume. On the AMD DC side, dc_link_detect_helper() has this early no-op return at the beginning: if ((link->connector_signal == SIGNAL_TYPE_LVDS || link->connector_signal == SIGNAL_TYPE_EDP) && link->local_sink) return true; So I guess if link->local_sink doesn't get NULL'ed during a suspend/resume cycle, then we never reach the setup code that would overwrite them with non-vbios settings? Sounds reasonable to me, given that eDP panels are usually fixed internal panels, nothing that gets hot(un-)plugged? I can't test, because suspend/resume with the Polaris gpu on the MBP 2017 is totally broken atm., and vgaswitcheroo can't do its job either. Looks like powering down the gpu works, but powering up doesn't. And modesetting at vgaswitcheroo switch time is also a no-go, because the DDC/AUX lines apparently can't be switched on that Apple gmux, and handover of that data seems to be not implemented in current vgaswitcheroo. At the moment switching between an AMD-only or Intel+AMD Prime setup is quite a pita... -mario
On 2020-01-09 4:04 p.m., Mario Kleiner wrote: > On Thu, Jan 9, 2020 at 8:49 PM Alex Deucher <alexdeucher@gmail.com > <mailto:alexdeucher@gmail.com>> wrote: > > On Thu, Jan 9, 2020 at 11:47 AM Mario Kleiner > <mario.kleiner.de@gmail.com <mailto:mario.kleiner.de@gmail.com>> > wrote: > > > > On Thu, Jan 9, 2020 at 4:40 PM Alex Deucher > <alexdeucher@gmail.com <mailto:alexdeucher@gmail.com>> wrote: > >> > >> On Thu, Jan 9, 2020 at 10:08 AM Mario Kleiner > >> <mario.kleiner.de@gmail.com > <mailto:mario.kleiner.de@gmail.com>> wrote: > >> > > As Harry mentioned in the other thread, won't this only work if the > display was brought up by the vbios? In the suspend/resume case, > won't we just fall back to 2.7Gbps? > > Alex > > > Adding Harry to cc... > > The code is only executed for eDP. On the Intel side, it seems that > intel_edp_init_dpcd() gets only called during driver load / > modesetting init, so not on resume. > > On the AMD DC side, dc_link_detect_helper() has this early no-op > return at the beginning: > > if ((link->connector_signal == SIGNAL_TYPE_LVDS || > link->connector_signal == SIGNAL_TYPE_EDP) && > link->local_sink) > return true; > > So i guess if link->local_sink doesn't get NULL'ed during a > suspend/resume cycle, then we never reach the setup code that would > overwrite with non vbios settings? > > Sounds reasonable to me, given that eDP panels are usually fixed > internal panels, nothing that gets hot(un-)plugged? > > I can't test, because suspend/resume with the Polaris gpu on the MBP > 2017 is totally broken atm., just as vgaswitcheroo can't do its job. > Looks like powering down the gpu works, but powering up doesn't. And > also modesetting at vgaswitcheroo switch time is no-go, because the > DDC/AUX lines apparently can't be switched on that Apple gmux, and > handover of that data seems to be not implemented in current > vgaswitcheroo. At the moment switching between AMD only or Intel+AMD > Prime setup is quite a pita... > I haven't followed the entire discussion on the i915 thread but for the amdgpu dc patch I would prefer a DPCD quirk to override the reported link settings with the correct link rate. Harry > -mario > >
On Thu, Jan 09, 2020 at 09:19:07PM +0100, Mario Kleiner wrote: > On Thu, Jan 9, 2020 at 7:24 PM Ville Syrjälä <ville.syrjala@linux.intel.com> > wrote: > > > On Thu, Jan 09, 2020 at 06:57:14PM +0100, Mario Kleiner wrote: > > > On Thu, Jan 9, 2020 at 5:47 PM Ville Syrjälä < > > ville.syrjala@linux.intel.com> > > > wrote: > > > > > > > On Thu, Jan 09, 2020 at 05:30:05PM +0100, Mario Kleiner wrote: > > > > > On Thu, Jan 9, 2020 at 4:38 PM Ville Syrjälä < > > > > ville.syrjala@linux.intel.com> > > > > > wrote: > > > > > > > > > > wouldn't work if dpcd[0x1] == 0xa, which it likely is [*]. AMD DC > > > identified it as DP 1.1, eDP 1.3, and these extended caps seem to be only > > > part of DP 1.3+ if i understand the comments in > > > intel_dp_extended_receiver_capabilities() correctly. > > > > > Ok, looking at previous debug output logs shows that those extended caps > are not present on the systems, ie. that extended caps bit is not set. So > dpcd[0x1] == 0xa. > > > > Yeah, but you never know how creative they've been with the DPCD in > > such a propritary machine. A full DPCD dump from /dev/drm_dp_aux* would > > be nice. Can you file a bug an attach the DPCD dump there so we have a > > good reference on what we're talking about (also for future if/when > > someone eventually starts to wonder why we have such hacks in the > > code)? > > > > > True, it's Apple which likes to "Think different..." :/ > > Will do. But is there a proper/better way to do the /dev/drm_dp_aux0 dump? > I used cat /dev/drm_dp_aux0 > dump, and that hangs, but if i interrupt it > after a few seconds, i get a dump file of 512k size, which seems excessive? > On AMD DC atm., in case that matters. It can take a while to dump the whole thing. If there are errors in some parts (against the spec but some devices simply don't care about the spec) you may need to use ddrescue/etc. to dump everything that can be dumped.
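A minimal userspace sketch of the kind of bounded dump being discussed: the file offset of /dev/drm_dp_aux* is the DPCD address, so pread() can walk the address space in small chunks and simply skip ranges that NAK or time out, instead of stalling the way a plain cat over the whole 1 MiB device can. The 4 KiB limit and 16-byte chunk size are arbitrary choices for the example:

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>

/*
 * Dump the first 4 KiB of the DPCD via the drm_dp_aux character device.
 * Chunks that error out (sink NAK, timeout) are skipped rather than
 * aborting the whole dump.
 */
int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "/dev/drm_dp_aux0";
	uint8_t buf[16];
	int fd = open(dev, O_RDONLY);

	if (fd < 0) {
		perror(dev);
		return 1;
	}

	for (off_t addr = 0; addr < 0x1000; addr += sizeof(buf)) {
		ssize_t n = pread(fd, buf, sizeof(buf), addr);

		if (n <= 0)
			continue;	/* unreadable range, e.g. sink NAK or timeout */

		printf("%05lx:", (long)addr);
		for (ssize_t i = 0; i < n; i++)
			printf(" %02x", buf[i]);
		printf("\n");
	}

	close(fd);
	return 0;
}
```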
On Fri, Jan 10, 2020 at 2:32 PM Ville Syrjälä <ville.syrjala@linux.intel.com> wrote: > On Thu, Jan 09, 2020 at 09:19:07PM +0100, Mario Kleiner wrote: > > On Thu, Jan 9, 2020 at 7:24 PM Ville Syrjälä < > ville.syrjala@linux.intel.com> > > wrote: > > > > > On Thu, Jan 09, 2020 at 06:57:14PM +0100, Mario Kleiner wrote: > > > > On Thu, Jan 9, 2020 at 5:47 PM Ville Syrjälä < > > > ville.syrjala@linux.intel.com> > > > > wrote: > > > > > > > > > On Thu, Jan 09, 2020 at 05:30:05PM +0100, Mario Kleiner wrote: > > > > > > On Thu, Jan 9, 2020 at 4:38 PM Ville Syrjälä < > > > > > ville.syrjala@linux.intel.com> > > > > > > wrote: > > > > > > > > > > > > > > wouldn't work if dpcd[0x1] == 0xa, which it likely is [*]. AMD DC > > > > identified it as DP 1.1, eDP 1.3, and these extended caps seem to be > only > > > > part of DP 1.3+ if i understand the comments in > > > > intel_dp_extended_receiver_capabilities() correctly. > > > > > > > > Ok, looking at previous debug output logs shows that those extended caps > > are not present on the systems, ie. that extended caps bit is not set. So > > dpcd[0x1] == 0xa. > > > > > > > Yeah, but you never know how creative they've been with the DPCD in > > > such a propritary machine. A full DPCD dump from /dev/drm_dp_aux* would > > > be nice. Can you file a bug an attach the DPCD dump there so we have a > > > good reference on what we're talking about (also for future if/when > > > someone eventually starts to wonder why we have such hacks in the > > > code)? > > > > > > > > True, it's Apple which likes to "Think different..." :/ > > > > Will do. But is there a proper/better way to do the /dev/drm_dp_aux0 > dump? > > I used cat /dev/drm_dp_aux0 > dump, and that hangs, but if i interrupt it > > after a few seconds, i get a dump file of 512k size, which seems > excessive? > > On AMD DC atm., in case that matters. > > It can take a while to dump the whole thing. If there are errors in some > parts (against the spec but some devices simply don't care about the > spec) you may need to use ddrescue/etc. to dump everything that can be > dumped. > > Ok, it is Mozilla bug 206157: https://bugzilla.kernel.org/show_bug.cgi?id=206157 I attached the first ~ 5000 Bytes of DPCD dump, as there is a 5k file size limit. The total dump is 512 kB, mostly zeros. -mario
On Thu, Jan 9, 2020 at 10:26 PM Harry Wentland <hwentlan@amd.com> wrote: > > > On 2020-01-09 4:04 p.m., Mario Kleiner wrote: > > On Thu, Jan 9, 2020 at 8:49 PM Alex Deucher <alexdeucher@gmail.com> wrote: > >> On Thu, Jan 9, 2020 at 11:47 AM Mario Kleiner >> <mario.kleiner.de@gmail.com> wrote: >> > >> > On Thu, Jan 9, 2020 at 4:40 PM Alex Deucher <alexdeucher@gmail.com> >> wrote: >> >> >> >> On Thu, Jan 9, 2020 at 10:08 AM Mario Kleiner >> >> <mario.kleiner.de@gmail.com> wrote: >> >> > >> As Harry mentioned in the other thread, won't this only work if the >> display was brought up by the vbios? In the suspend/resume case, >> won't we just fall back to 2.7Gbps? >> >> Alex >> >> > Adding Harry to cc... > > The code is only executed for eDP. On the Intel side, it seems that > intel_edp_init_dpcd() gets only called during driver load / modesetting > init, so not on resume. > > On the AMD DC side, dc_link_detect_helper() has this early no-op return at > the beginning: > > if ((link->connector_signal == SIGNAL_TYPE_LVDS || > link->connector_signal == SIGNAL_TYPE_EDP) && > link->local_sink) > return true; > > > So i guess if link->local_sink doesn't get NULL'ed during a suspend/resume > cycle, then we never reach the setup code that would overwrite with non > vbios settings? > > Sounds reasonable to me, given that eDP panels are usually fixed internal > panels, nothing that gets hot(un-)plugged? > > I can't test, because suspend/resume with the Polaris gpu on the MBP 2017 > is totally broken atm., just as vgaswitcheroo can't do its job. Looks like > powering down the gpu works, but powering up doesn't. And also modesetting > at vgaswitcheroo switch time is no-go, because the DDC/AUX lines apparently > can't be switched on that Apple gmux, and handover of that data seems to be > not implemented in current vgaswitcheroo. At the moment switching between > AMD only or Intel+AMD Prime setup is quite a pita... > > > I haven't followed the entire discussion on the i915 thread but for the > amdgpu dc patch I would prefer a DPCD quirk to override the reported link > settings with the correct link rate. > > Harry > > Ok, as you wish. How do I do that? Is there already some DP-related official mechanism, or do I just add some if-statement to detect_edp_sink_caps() that matches on a new EDID quirk to be defined for that panel in drm_edid etc., and then if (EDID quirk for that panel) dpcd[DP_MAX_LINK_RATE] = 0xc; The other question would be whether we should do it for this panel on AMD DC at all? I see my original patch more as something to fix other odd (Apple?) panels than for this specific one. As mentioned above, photometer testing on AMD DC with a Polaris on the MBP 2017 suggests that the default 2.7 Gbps 8 bit mode + AMD's spatial dithering provides higher quality results for >= 10 bpc framebuffers than actually running the panel at 10 bit without dithering. As a little side-note, for squeezing out more precision than the 10 bpc framebuffers we officially have in Mesa/OpenGL, my software Psychtoolbox has some special hacks, playing funny tricks with resizing X-Screens, applying bit-twiddling shaders to images and MMIO programming the gpu "behind the back" of the driver, to get the gpu into RGBA16161616 linear scanout mode. That gives up to 12 bpc precision on that panel according to photometer measurements.
While the panel in 8 bit mode + AMD's 4 bit spatial dithering gives pretty good results, the panel at 10 bit + 2 bit spatial dithering shows some artifacts. And even with a normal 10 bit framebuffer, the 8 bit panel mode + 2 bit dithering seems to give better results than the 10 bit panel mode. -mario
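A hypothetical shape for the EDID-keyed override Mario describes above; the quirk table, the placeholder IDs and the needs_max_link_rate_quirk() helper are made up for illustration, and a real patch would hook into whatever quirk mechanism the driver already provides:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical sketch only, fleshing out the if-statement described in
 * the mail above.  The IDs below are placeholders, not the real panel's
 * EDID manufacturer/product IDs.
 */
struct panel_quirk {
	uint16_t mfg_id;	/* EDID bytes 8-9 */
	uint16_t prod_id;	/* EDID bytes 10-11 */
	uint8_t max_link_rate;	/* override for DPCD 0x001 (DP_MAX_LINK_RATE) */
};

static const struct panel_quirk panel_quirks[] = {
	/* Placeholder IDs standing in for the MacBook Pro 2017 Retina panel. */
	{ 0x0610, 0xa030, 0x0c /* 324000 kHz / 3.24 Gbps */ },
};

static bool needs_max_link_rate_quirk(uint16_t mfg_id, uint16_t prod_id,
				      uint8_t *max_link_rate)
{
	for (size_t i = 0; i < sizeof(panel_quirks) / sizeof(panel_quirks[0]); i++) {
		if (panel_quirks[i].mfg_id == mfg_id &&
		    panel_quirks[i].prod_id == prod_id) {
			*max_link_rate = panel_quirks[i].max_link_rate;
			return true;
		}
	}

	return false;
}
```

detect_edp_sink_caps() would then overwrite the cached DP_MAX_LINK_RATE byte when the quirk matches, before the supported link settings are derived from it.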
On Thu, Jan 09, 2020 at 04:26:19PM -0500, Harry Wentland wrote: > > > On 2020-01-09 4:04 p.m., Mario Kleiner wrote: > > On Thu, Jan 9, 2020 at 8:49 PM Alex Deucher <alexdeucher@gmail.com > > <mailto:alexdeucher@gmail.com>> wrote: > > > > On Thu, Jan 9, 2020 at 11:47 AM Mario Kleiner > > <mario.kleiner.de@gmail.com <mailto:mario.kleiner.de@gmail.com>> > > wrote: > > > > > > On Thu, Jan 9, 2020 at 4:40 PM Alex Deucher > > <alexdeucher@gmail.com <mailto:alexdeucher@gmail.com>> wrote: > > >> > > >> On Thu, Jan 9, 2020 at 10:08 AM Mario Kleiner > > >> <mario.kleiner.de@gmail.com > > <mailto:mario.kleiner.de@gmail.com>> wrote: > > >> > > > As Harry mentioned in the other thread, won't this only work if the > > display was brought up by the vbios? In the suspend/resume case, > > won't we just fall back to 2.7Gbps? > > > > Alex > > > > > > Adding Harry to cc... > > > > The code is only executed for eDP. On the Intel side, it seems that > > intel_edp_init_dpcd() gets only called during driver load / > > modesetting init, so not on resume. > > > > On the AMD DC side, dc_link_detect_helper() has this early no-op > > return at the beginning: > > > > if ((link->connector_signal == SIGNAL_TYPE_LVDS || > > link->connector_signal == SIGNAL_TYPE_EDP) && > > link->local_sink) > > return true; > > > > So i guess if link->local_sink doesn't get NULL'ed during a > > suspend/resume cycle, then we never reach the setup code that would > > overwrite with non vbios settings? > > > > Sounds reasonable to me, given that eDP panels are usually fixed > > internal panels, nothing that gets hot(un-)plugged? > > > > I can't test, because suspend/resume with the Polaris gpu on the MBP > > 2017 is totally broken atm., just as vgaswitcheroo can't do its job. > > Looks like powering down the gpu works, but powering up doesn't. And > > also modesetting at vgaswitcheroo switch time is no-go, because the > > DDC/AUX lines apparently can't be switched on that Apple gmux, and > > handover of that data seems to be not implemented in current > > vgaswitcheroo. At the moment switching between AMD only or Intel+AMD > > Prime setup is quite a pita... > > > > I haven't followed the entire discussion on the i915 thread but for the > amdgpu dc patch I would prefer a DPCD quirk to override the reported > link settings with the correct link rate. We could consider adding a standard function for reading the receiver caps and applying the quirk there. I have a feeling that putting it into drm_dp_dpcd_read() would be a bit too low level since it would prevent reading the non-quirked raw data easily.
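A rough sketch of the shared helper idea floated here; neither drm_dp_read_receiver_caps() nor drm_dp_apply_receiver_cap_quirks() exists in the drm core, both names are made up for this sketch:

```c
/*
 * Hypothetical sketch: read the DP_RECEIVER_CAP_SIZE receiver cap field
 * once and give the drm core a single place to apply per-device fixups,
 * so i915 and amdgpu see the same (quirked) values while the raw data
 * stays reachable via plain drm_dp_dpcd_read().
 */
static int drm_dp_read_receiver_caps(struct drm_dp_aux *aux,
				     u8 dpcd[DP_RECEIVER_CAP_SIZE])
{
	ssize_t ret;

	ret = drm_dp_dpcd_read(aux, DP_DPCD_REV, dpcd, DP_RECEIVER_CAP_SIZE);
	if (ret != DP_RECEIVER_CAP_SIZE)
		return ret < 0 ? ret : -EIO;

	/* Central, driver-independent spot for DPCD fixups (hypothetical). */
	drm_dp_apply_receiver_cap_quirks(aux, dpcd);

	return 0;
}
```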
On Fri, 10 Jan 2020, Ville Syrjälä <ville.syrjala@linux.intel.com> wrote: > On Thu, Jan 09, 2020 at 04:26:19PM -0500, Harry Wentland wrote: >> >> >> On 2020-01-09 4:04 p.m., Mario Kleiner wrote: >> > On Thu, Jan 9, 2020 at 8:49 PM Alex Deucher <alexdeucher@gmail.com >> > <mailto:alexdeucher@gmail.com>> wrote: >> > >> > On Thu, Jan 9, 2020 at 11:47 AM Mario Kleiner >> > <mario.kleiner.de@gmail.com <mailto:mario.kleiner.de@gmail.com>> >> > wrote: >> > > >> > > On Thu, Jan 9, 2020 at 4:40 PM Alex Deucher >> > <alexdeucher@gmail.com <mailto:alexdeucher@gmail.com>> wrote: >> > >> >> > >> On Thu, Jan 9, 2020 at 10:08 AM Mario Kleiner >> > >> <mario.kleiner.de@gmail.com >> > <mailto:mario.kleiner.de@gmail.com>> wrote: >> > >> > >> > As Harry mentioned in the other thread, won't this only work if the >> > display was brought up by the vbios? In the suspend/resume case, >> > won't we just fall back to 2.7Gbps? >> > >> > Alex >> > >> > >> > Adding Harry to cc... >> > >> > The code is only executed for eDP. On the Intel side, it seems that >> > intel_edp_init_dpcd() gets only called during driver load / >> > modesetting init, so not on resume. >> > >> > On the AMD DC side, dc_link_detect_helper() has this early no-op >> > return at the beginning: >> > >> > if ((link->connector_signal == SIGNAL_TYPE_LVDS || >> > link->connector_signal == SIGNAL_TYPE_EDP) && >> > link->local_sink) >> > return true; >> > >> > So i guess if link->local_sink doesn't get NULL'ed during a >> > suspend/resume cycle, then we never reach the setup code that would >> > overwrite with non vbios settings? >> > >> > Sounds reasonable to me, given that eDP panels are usually fixed >> > internal panels, nothing that gets hot(un-)plugged? >> > >> > I can't test, because suspend/resume with the Polaris gpu on the MBP >> > 2017 is totally broken atm., just as vgaswitcheroo can't do its job. >> > Looks like powering down the gpu works, but powering up doesn't. And >> > also modesetting at vgaswitcheroo switch time is no-go, because the >> > DDC/AUX lines apparently can't be switched on that Apple gmux, and >> > handover of that data seems to be not implemented in current >> > vgaswitcheroo. At the moment switching between AMD only or Intel+AMD >> > Prime setup is quite a pita... >> > >> >> I haven't followed the entire discussion on the i915 thread but for the >> amdgpu dc patch I would prefer a DPCD quirk to override the reported >> link settings with the correct link rate. > > We could consider adding a standard function for reading the receiver > caps and applying the quirk there. I have a feeling that putting it > into drm_dp_dpcd_read() would be a bit too low level since it would > prevent reading the non-quirked raw data easily. Everything about this panel is ugly. The panel does not claim to support extended receiver caps. (I have not seen whether there is non-zero data at 0x2200. Mario, please provide a dump of that DPCD region.) The panel does use DPCD_DISPLAY_CONTROL_CAPABLE and reports eDP 1.3 in EDP_DPCD_REV. eDP 1.3 says only four values are supported in LINK_BW_SET (0x06, 0x0a, 0x14, and 0x1e). The same for MAX_LINK_RATE for all DP, and even in the extended receiver cap. You could perhaps make the case for the interpretation in commit 57a1b0893782 ("drm: Make the bw/link rate calculations more forgiving") that in eDP 1.4+ you can use arbitrary values in LINK_BW_SET. But I think that's a stretch, really. And anyway the panel reports eDP 1.3. 
The panel is consistent in that it does not claim to support link rate selection, nor does it have anything in SUPPORTED_LINK_RATES; both of those are eDP 1.4+ features. However, the firmware sets 0x0c in LINK_BW_SET, which exceeds the 0x0a maximum the panel reports in MAX_LINK_RATE. Bottom line is, *if* we're going to support this proprietary crap of a panel, it *must* be an isolated quirk. I certainly won't take a patch generalizing this to any panel out there. But you're going to have to be pretty clever to isolate this crap. I'm not sure if quirking a homebrew extended receiver cap is going to be enough. BR, Jani.
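For the DPCD dump Jani asks for above, the AUX channel is exposed to userspace when the kernel is built with CONFIG_DRM_DP_AUX_CHARDEV. A rough userspace sketch follows; the /dev/drm_dp_aux0 index is a guess and depends on the machine, and reads typically need root and a powered-up panel.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	unsigned char buf[16];
	int fd = open("/dev/drm_dp_aux0", O_RDONLY);

	/* Read the extended receiver cap area starting at DPCD 0x2200. */
	if (fd < 0 || pread(fd, buf, sizeof(buf), 0x2200) != (ssize_t)sizeof(buf)) {
		perror("DPCD read");
		return 1;
	}

	for (unsigned int i = 0; i < sizeof(buf); i++)
		printf("0x%04x: 0x%02x\n", 0x2200 + i, buf[i]);

	close(fd);
	return 0;
}

As for isolating the hack, one possible shape is an entry in the per-OUI quirk table that drm_dp_helper.c already keeps, assuming the table keeps its current layout (OUI, device ID, is_branch flag, quirk bits). The OUI below is a placeholder and DP_DPCD_QUIRK_CAN_DO_324000 is an invented bit; whether the panel exposes anything usable to match on is exactly the open question Jani raises.

static const struct dpcd_quirk dpcd_quirk_list[] = {
	/* ... existing entries ... */

	/* Apple MacBookPro 2017 Retina panel: trains at 324000 kHz (0x0c). */
	{ OUI(0x00, 0x00, 0x00) /* placeholder */, DEVICE_ID_ANY, false,
	  BIT(DP_DPCD_QUIRK_CAN_DO_324000) /* hypothetical new quirk bit */ },
};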
On Wed, Jan 15, 2020 at 02:34:02PM +0200, Jani Nikula wrote: > On Fri, 10 Jan 2020, Ville Syrjälä <ville.syrjala@linux.intel.com> wrote: > > On Thu, Jan 09, 2020 at 04:26:19PM -0500, Harry Wentland wrote: > >> > >> > >> On 2020-01-09 4:04 p.m., Mario Kleiner wrote: > >> > On Thu, Jan 9, 2020 at 8:49 PM Alex Deucher <alexdeucher@gmail.com > >> > <mailto:alexdeucher@gmail.com>> wrote: > >> > > >> > On Thu, Jan 9, 2020 at 11:47 AM Mario Kleiner > >> > <mario.kleiner.de@gmail.com <mailto:mario.kleiner.de@gmail.com>> > >> > wrote: > >> > > > >> > > On Thu, Jan 9, 2020 at 4:40 PM Alex Deucher > >> > <alexdeucher@gmail.com <mailto:alexdeucher@gmail.com>> wrote: > >> > >> > >> > >> On Thu, Jan 9, 2020 at 10:08 AM Mario Kleiner > >> > >> <mario.kleiner.de@gmail.com > >> > <mailto:mario.kleiner.de@gmail.com>> wrote: > >> > >> > > >> > As Harry mentioned in the other thread, won't this only work if the > >> > display was brought up by the vbios? In the suspend/resume case, > >> > won't we just fall back to 2.7Gbps? > >> > > >> > Alex > >> > > >> > > >> > Adding Harry to cc... > >> > > >> > The code is only executed for eDP. On the Intel side, it seems that > >> > intel_edp_init_dpcd() gets only called during driver load / > >> > modesetting init, so not on resume. > >> > > >> > On the AMD DC side, dc_link_detect_helper() has this early no-op > >> > return at the beginning: > >> > > >> > if ((link->connector_signal == SIGNAL_TYPE_LVDS || > >> > link->connector_signal == SIGNAL_TYPE_EDP) && > >> > link->local_sink) > >> > return true; > >> > > >> > So i guess if link->local_sink doesn't get NULL'ed during a > >> > suspend/resume cycle, then we never reach the setup code that would > >> > overwrite with non vbios settings? > >> > > >> > Sounds reasonable to me, given that eDP panels are usually fixed > >> > internal panels, nothing that gets hot(un-)plugged? > >> > > >> > I can't test, because suspend/resume with the Polaris gpu on the MBP > >> > 2017 is totally broken atm., just as vgaswitcheroo can't do its job. > >> > Looks like powering down the gpu works, but powering up doesn't. And > >> > also modesetting at vgaswitcheroo switch time is no-go, because the > >> > DDC/AUX lines apparently can't be switched on that Apple gmux, and > >> > handover of that data seems to be not implemented in current > >> > vgaswitcheroo. At the moment switching between AMD only or Intel+AMD > >> > Prime setup is quite a pita... > >> > > >> > >> I haven't followed the entire discussion on the i915 thread but for the > >> amdgpu dc patch I would prefer a DPCD quirk to override the reported > >> link settings with the correct link rate. > > > > We could consider adding a standard function for reading the receiver > > caps and applying the quirk there. I have a feeling that putting it > > into drm_dp_dpcd_read() would be a bit too low level since it would > > prevent reading the non-quirked raw data easily. > > Everything about this panel is ugly. > > The panel does not claim to support extended receiver caps. (I have not > seen whether there is non-zero data at 0x2200. Mario, please provide a > dump of that DPCD region.) > > The panel does use DPCD_DISPLAY_CONTROL_CAPABLE and reports eDP 1.3 in > EDP_DPCD_REV. > > eDP 1.3 says only four values are supported in LINK_BW_SET (0x06, 0x0a, > 0x14, and 0x1e). The same for MAX_LINK_RATE for all DP, and even in the > extended receiver cap. 
> > You could perhaps make the case for the interpretation in commit > 57a1b0893782 ("drm: Make the bw/link rate calculations more forgiving") > that in eDP 1.4+ you can use arbitrary values in LINK_BW_SET. But I > think that's a stretch, really. And anyway the panel reports eDP 1.3. > > The panel is consistent in that it does not claim to support link rate > selection nor does it have anything in SUPPORTED_LINK_RATES which are > eDP 1.4+ features. > > However, the panel reports 0x0a as the max link rate in MAX_LINK_RATE, > which exceeds the value 0x0c set in LINK_BW_SET by the firmware. > > Bottom line is, *if* we're going to support this proprietary crap of a > panel, it *must* be an isolated quirk. I certainly won't take a patch > generalizing this to any panel out there. But you're going to have to be > pretty clever to isolate this crap. I'm not sure if quirking a homebrew > extended receiver cap is going to be enough.

drm_dp_read_receiver_caps()
{
	dpcd_read(dpcd);

	if (quirk) {
		DRM_DEBUG_KMS("blah");
		dpcd[MAX_BW] = 0xc;
	}
}

intel_dp_sink_rates()
{
	...
	if (max_bw > rates[i-1])
		rates[i++] = max_bw;
}

Would seem more or less OK to me. And doing it this way would also cover the MyDP 6.75 case automagically.
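Fleshed out a little, and assuming intel_dp_set_sink_rates() keeps roughly its current shape, the sink-rate half of that pseudocode could look like the sketch below. This is an illustration of the idea, not a tested patch; the quirked DP_MAX_LINK_RATE value (0x0c, i.e. 324000 kHz) would come from the quirk applied during the receiver-cap read.

static void intel_dp_set_sink_rates(struct intel_dp *intel_dp)
{
	static const int dp_rates[] = {
		162000, 270000, 540000, 810000
	};
	int i, max_rate;

	max_rate = drm_dp_bw_code_to_link_rate(intel_dp->dpcd[DP_MAX_LINK_RATE]);

	/* Copy the standard rates up to and including the advertised maximum. */
	for (i = 0; i < ARRAY_SIZE(dp_rates); i++) {
		if (dp_rates[i] > max_rate)
			break;
		intel_dp->sink_rates[i] = dp_rates[i];
	}

	/*
	 * If the (possibly quirked) maximum is a non-standard rate above the
	 * last standard rate we copied, keep it as an extra sink rate. This
	 * would also cover other non-standard maxima such as MyDP's 6.75 Gbps.
	 */
	if (i > 0 && i < DP_MAX_SUPPORTED_RATES &&
	    max_rate > intel_dp->sink_rates[i - 1])
		intel_dp->sink_rates[i++] = max_rate;

	intel_dp->num_sink_rates = i;
}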
diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 2f31d226c6eb..aa3e0b5108c6 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -4368,6 +4368,8 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp)
 {
 	struct drm_i915_private *dev_priv =
 		to_i915(dp_to_dig_port(intel_dp)->base.base.dev);
+	int max_rate;
+	u8 link_bw;
 
 	/* this function is meant to be called only once */
 	WARN_ON(intel_dp->dpcd[DP_DPCD_REV] != 0);
@@ -4433,6 +4435,27 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp)
 	else
 		intel_dp_set_sink_rates(intel_dp);
 
+	/*
+	 * If the firmware programmed a rate higher than the standard sink rates
+	 * during boot, then add that rate as a valid sink rate, as fw knows
+	 * this is a good rate and we get extra bandwidth.
+	 *
+	 * Helps, e.g., on the Apple MacBookPro 2017 Retina panel, which is only
+	 * eDP 1.1, but supports the unusual rate of 324000 kHz at bootup, for
+	 * 10 bpc / 30 bit color depth.
+	 */
+	if (!intel_dp->use_rate_select &&
+	    (drm_dp_dpcd_read(&intel_dp->aux, DP_LINK_BW_SET, &link_bw, 1) == 1) &&
+	    (link_bw > 0) && (intel_dp->num_sink_rates < DP_MAX_SUPPORTED_RATES)) {
+		max_rate = drm_dp_bw_code_to_link_rate(link_bw);
+		if (max_rate > intel_dp->sink_rates[intel_dp->num_sink_rates - 1]) {
+			intel_dp->sink_rates[intel_dp->num_sink_rates] = max_rate;
+			intel_dp->num_sink_rates++;
+			DRM_DEBUG_KMS("Adding max bandwidth eDP rate %d kHz.\n",
+				      max_rate);
+		}
+	}
+
 	intel_dp_set_common_rates(intel_dp);
 
 	/* Read the eDP DSC DPCD registers */
If the current eDP link rate, as read from hw, provides a higher bandwidth than the standard link rates, then add the current link rate to the link_rates array for consideration in future mode-sets.

These initial current eDP link settings have been set up by firmware during boot, so they should work on the eDP panel. Therefore use them if the firmware thinks they are good and they provide higher link bandwidth, e.g., to enable higher resolutions / color depths.

This fixes a problem found on the MacBookPro 2017 Retina panel:

The panel reports 10 bpc color depth in its EDID, and the UEFI firmware chooses link settings at boot which support enough bandwidth for 10 bpc (a 324000 kHz link rate, i.e. 3.24 Gbps per lane, to be precise), but the DP_MAX_LINK_RATE dpcd register only reports 2.7 Gbps as possible, so intel_dp_set_sink_rates() would cap at that. This restricts achievable color depth to 8 bpc, not providing the full color depth of the panel. With this commit, we can use the firmware setting and get the full 10 bpc advertised by the Retina panel.

Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)