[10/12] drm/i915: Update link training automated test function for Displayport compliance

Message ID 1405365047-6866-11-git-send-email-tprevite@gmail.com (mailing list archive)
State New, archived

Commit Message

Todd Previte July 14, 2014, 7:10 p.m. UTC
Implements basic link training functionality for Displayport automated compliance
testing.

Signed-off-by: Todd Previte <tprevite@gmail.com>
---
 drivers/gpu/drm/i915/intel_dp.c | 47 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 47 insertions(+)

Comments

Daniel Vetter July 22, 2014, 6:15 a.m. UTC | #1
On Mon, Jul 14, 2014 at 12:10:45PM -0700, Todd Previte wrote:
> Implements basic link training functionality for Displayport automated compliance
> testing.
> 
> Signed-off-by: Todd Previte <tprevite@gmail.com>
> ---
>  drivers/gpu/drm/i915/intel_dp.c | 47 +++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 47 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> index 65830e9..e4b31ad 100644
> --- a/drivers/gpu/drm/i915/intel_dp.c
> +++ b/drivers/gpu/drm/i915/intel_dp.c
> @@ -3427,6 +3427,53 @@ static uint8_t
>  intel_dp_autotest_link_training(struct intel_dp *intel_dp)
>  {
>  	uint8_t test_result = DP_TEST_NAK;
> +	uint8_t rxdata[2];
> +	uint8_t link_status[DP_LINK_STATUS_SIZE];
> +	int bytes_ret = 0;
> +	struct drm_connector *connector = &intel_dp->attached_connector->base;
> +	struct intel_digital_port *intel_dig_port =
> +		enc_to_dig_port(&intel_dp->attached_connector->encoder->base);
> +
> +	/* Read test parameters */
> +	bytes_ret = drm_dp_dpcd_read(&intel_dp->aux, DP_TEST_LINK_RATE, rxdata, 2);
> +
> +	/* Set link rate directly */
> +	intel_dp->link_bw = rxdata[0];
> +	/* Preserve 7:5 when setting lane count */
> +	intel_dp->lane_count &= 0xE0;
> +	intel_dp->lane_count |= rxdata[1];

This won't work - if you change the link config you need to do a full
recomputation of the modeset config and a full modeset. No, we don't (yet)
have the infrastructure for this, which is why DP is such a pain since we
can't change the link config once we've decided on something :(
-Daniel

> +
> +	DRM_DEBUG_KMS("Displayport: Link training testing - %d lanes @ %02x link rate\n",
> +			intel_dp->lane_count, intel_dp->link_bw);
> +
> +	/* Train the link */
> +	intel_dp->DP = intel_dig_port->saved_port_bits |
> +					DDI_BUF_CTL_ENABLE |
> +					DDI_BUF_EMP_400MV_0DB_HSW;
> +	intel_dp->DP |= DDI_PORT_WIDTH(intel_dp->lane_count);
> +
> +	if (!intel_dp_start_link_train(intel_dp)) {
> +		DRM_DEBUG_KMS("Displayport: intel_dp_start_link_train() failed\n");
> +		test_result = false;
> +	}
> +	if (!intel_dp_complete_link_train(intel_dp)) {
> +		DRM_DEBUG_KMS("Displayport: intel_dp_complete_link_train() failed\n");
> +		test_result = false;
> +	}
> +	if (!intel_dp_stop_link_train(intel_dp)) {
> +		DRM_DEBUG_KMS("Displayport: intel_dp_stop_link_train() failed\n");
> +		test_result = false;
> +	}
> +
> +	/* Check link status for successful completion */
> +	if (drm_dp_channel_eq_ok(link_status, intel_dp->lane_count))
> +		test_result = true;
> +
> +	if (test_result == false) {
> +		/* Clear link training here */
> +		intel_dp_set_idle_link_train(intel_dp);
> +	}
> +
>  	return test_result;
>  }
>  
> -- 
> 1.9.1
> 
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/intel-gfx
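The lane-count update Daniel objects to above can be looked at in isolation. A standalone sketch of the bit manipulation the patch performs on the DPCD test-request byte (the 0xE0 mask comes from the patch's "Preserve 7:5" comment; the complementary 0x1F mask on the requested value is an assumption this sketch adds, since the patch ORs `rxdata[1]` in unmasked):

```c
#include <stdint.h>

/* Keep bits 7:5 of the stored lane_count byte and take the lane count
 * itself from the low bits of the DPCD test request. The 0xE0/0x1F
 * split mirrors the "Preserve 7:5" comment in the patch; masking the
 * requested value with 0x1F is an addition here for safety. */
static uint8_t merge_lane_count(uint8_t stored, uint8_t requested)
{
	return (uint8_t)((stored & 0xE0) | (requested & 0x1F));
}
```

Note this only illustrates the field update; as the review points out, changing the stored value without a full modeset does not actually reconfigure the hardware.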
Jesse Barnes July 22, 2014, 8:40 p.m. UTC | #2
On Tue, 22 Jul 2014 08:15:30 +0200
Daniel Vetter <daniel@ffwll.ch> wrote:

> On Mon, Jul 14, 2014 at 12:10:45PM -0700, Todd Previte wrote:
> > Implements basic link training functionality for Displayport automated compliance
> > testing.
> > 
> > Signed-off-by: Todd Previte <tprevite@gmail.com>
> > ---
> >  drivers/gpu/drm/i915/intel_dp.c | 47 +++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 47 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> > index 65830e9..e4b31ad 100644
> > --- a/drivers/gpu/drm/i915/intel_dp.c
> > +++ b/drivers/gpu/drm/i915/intel_dp.c
> > @@ -3427,6 +3427,53 @@ static uint8_t
> >  intel_dp_autotest_link_training(struct intel_dp *intel_dp)
> >  {
> >  	uint8_t test_result = DP_TEST_NAK;
> > +	uint8_t rxdata[2];
> > +	uint8_t link_status[DP_LINK_STATUS_SIZE];
> > +	int bytes_ret = 0;
> > +	struct drm_connector *connector = &intel_dp->attached_connector->base;
> > +	struct intel_digital_port *intel_dig_port =
> > +		enc_to_dig_port(&intel_dp->attached_connector->encoder->base);
> > +
> > +	/* Read test parameters */
> > +	bytes_ret = drm_dp_dpcd_read(&intel_dp->aux, DP_TEST_LINK_RATE, rxdata, 2);
> > +
> > +	/* Set link rate directly */
> > +	intel_dp->link_bw = rxdata[0];
> > +	/* Preserve 7:5 when setting lane count */
> > +	intel_dp->lane_count &= 0xE0;
> > +	intel_dp->lane_count |= rxdata[1];
> 
> This won't work - if you change the link config you need to do a full
> recomputation of the modeset config and a full modeset. No, we don't (yet)
> have the infrastructure for this, which is why DP is such a pain since we
> can't change the link config once we've decided on something :(

I think that depends on how we're going to structure the code.  For
simply calling the link train functions, it looks like messing with the
intel_dp->foo fields will roughly do what Todd wants, which is to
simply try a re-train with a different set of params, ignoring the
actual fb and pipe configuration.

Or is that what you had in mind?  You'd like to see valid data and mode
timings to drive a given lane count and bw instead?  I'm not sure the
testing spec cares about that; Todd?
Daniel Vetter July 22, 2014, 8:44 p.m. UTC | #3
On Tue, Jul 22, 2014 at 10:40 PM, Jesse Barnes <jbarnes@virtuousgeek.org> wrote:
>> > +   /* Set link rate directly */
>> > +   intel_dp->link_bw = rxdata[0];
>> > +   /* Preserve 7:5 when setting lane count */
>> > +   intel_dp->lane_count &= 0xE0;
>> > +   intel_dp->lane_count |= rxdata[1];
>>
>> This won't work - if you change the link config you need to do a full
>> recomputation of the modeset config and a full modeset. No, we don't (yet)
>> have the infrastructure for this, which is why DP is such a pain since we
>> can't change the link config once we've decided on something :(
>
> I think that depends on how we're going to structure the code.  For
> simply calling the link train functions, it looks like messing with the
> intel_dp->foo fields will roughly do what Todd wants, which is to
> simply try a re-train with a different set of params, ignoring the
> actual fb and pipe configuration.
>
> Or is that what you had in mind?  You'd like to see valid data and mode
> timings to drive a given lane count and bw instead?  I'm not sure the
> testing spec cares about that; Todd?

If you change the link_bw then you need to do a full modeset sequence
since that's the only way to convince our hw to switch to the new
clock. The current code in this patch falls short and only retrains
the link.

But even if we'd do the full modeset the compute_config stage would
overwrite the link parameters again. So the correct way to do this is
to pimp the compute config code to have limits (we might as well
expose them as connector properties while at it for debugging) and
then do a full modeset sequence here. Then we could finally implement
(with a bit more code) the transparent fallback to a different config
if the first one doesn't work.
-Daniel
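Daniel's suggested direction - give the compute-config stage limits instead of poking the link fields directly - can be sketched in standalone form. The struct and function names below are invented for illustration; they are not existing i915 code:

```c
#include <stdint.h>

/* Hypothetical illustration of the suggested approach: the compliance
 * handler records upper bounds, and the config computation clamps to
 * them on the next full modeset, instead of the handler writing
 * intel_dp->link_bw / lane_count behind the modeset code's back. */
struct dp_link_limits {
	uint8_t max_link_bw;     /* DPCD link-rate code, e.g. 0x0a for 2.7 GHz */
	uint8_t max_lane_count;
};

static void clamp_link_config(uint8_t *link_bw, uint8_t *lane_count,
			      const struct dp_link_limits *limits)
{
	if (*link_bw > limits->max_link_bw)
		*link_bw = limits->max_link_bw;
	if (*lane_count > limits->max_lane_count)
		*lane_count = limits->max_lane_count;
}
```

With something like this, the test handler would set the limits from the DPCD test request and then trigger a full modeset, letting the normal config path pick the (now bounded) link parameters.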
Jesse Barnes July 22, 2014, 9:03 p.m. UTC | #4
On Tue, 22 Jul 2014 22:44:54 +0200
Daniel Vetter <daniel@ffwll.ch> wrote:

> On Tue, Jul 22, 2014 at 10:40 PM, Jesse Barnes <jbarnes@virtuousgeek.org> wrote:
> >> > +   /* Set link rate directly */
> >> > +   intel_dp->link_bw = rxdata[0];
> >> > +   /* Preserve 7:5 when setting lane count */
> >> > +   intel_dp->lane_count &= 0xE0;
> >> > +   intel_dp->lane_count |= rxdata[1];
> >>
> >> This won't work - if you change the link config you need to do a full
> >> recomputation of the modeset config and a full modeset. No, we don't (yet)
> >> have the infrastructure for this, which is why DP is such a pain since we
> >> can't change the link config once we've decided on something :(
> >
> > I think that depends on how we're going to structure the code.  For
> > simply calling the link train functions, it looks like messing with the
> > intel_dp->foo fields will roughly do what Todd wants, which is to
> > simply try a re-train with a different set of params, ignoring the
> > actual fb and pipe configuration.
> >
> > Or is that what you had in mind?  You'd like to see valid data and mode
> > timings to drive a given lane count and bw instead?  I'm not sure the
> > testing spec cares about that; Todd?
> 
> If you change the link_bw then you need to do a full modeset sequence
> since that's the only way to convince our hw to switch to the new
> clock. The current code in this patch falls short and only retrains
> the link.
> 
> But even if we'd do the full modeset the compute_config stage would
> overwrite the link parameters again. So the correct way to do this is
> to pimp the compute config code to have limits (we might as well
> expose them as connector properties while at it for debugging) and
> then do a full modeset sequence here. Then we could finally implement
> (with a bit more code) the transparent fallback to a different config
> if the first one doesn't work.

Right, yeah going through the full sequence sounds like it's required
for most of the tests based on what Todd tells me, since the test sink
will look at the active mode transmitted etc.

So the simple re-train and send misc isn't sufficient, even if the
training succeeds.
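For reference, the pass/fail decision the patch delegates to drm_dp_channel_eq_ok() can be sketched standalone. This assumes the standard DPCD lane-status layout (two 4-bit lane-status fields per byte starting at DP_LANE0_1_STATUS, with the interlane-align bit two bytes later); it is a simplified reimplementation for illustration, not the drm helper itself:

```c
#include <stdbool.h>
#include <stdint.h>

#define LANE_CR_DONE          (1 << 0)
#define LANE_CHANNEL_EQ_DONE  (1 << 1)
#define LANE_SYMBOL_LOCKED    (1 << 2)
#define LANE_OK (LANE_CR_DONE | LANE_CHANNEL_EQ_DONE | LANE_SYMBOL_LOCKED)
#define INTERLANE_ALIGN_DONE  (1 << 0)

/* Simplified channel-EQ check over raw DPCD link-status bytes:
 * link_status[0..1] hold two 4-bit lane-status fields each,
 * link_status[2] holds the interlane-align bit. */
static bool channel_eq_ok(const uint8_t link_status[3], int lane_count)
{
	int lane;

	if (!(link_status[2] & INTERLANE_ALIGN_DONE))
		return false;
	for (lane = 0; lane < lane_count; lane++) {
		uint8_t s = (link_status[lane / 2] >> ((lane & 1) * 4)) & 0xF;
		if ((s & LANE_OK) != LANE_OK)
			return false;
	}
	return true;
}
```

Note that a check like this is only meaningful on status bytes freshly read back from the sink over AUX after training completes.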
Patch

diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 65830e9..e4b31ad 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -3427,6 +3427,53 @@ static uint8_t
 intel_dp_autotest_link_training(struct intel_dp *intel_dp)
 {
 	uint8_t test_result = DP_TEST_NAK;
+	uint8_t rxdata[2];
+	uint8_t link_status[DP_LINK_STATUS_SIZE];
+	int bytes_ret = 0;
+	struct drm_connector *connector = &intel_dp->attached_connector->base;
+	struct intel_digital_port *intel_dig_port =
+		enc_to_dig_port(&intel_dp->attached_connector->encoder->base);
+
+	/* Read test parameters */
+	bytes_ret = drm_dp_dpcd_read(&intel_dp->aux, DP_TEST_LINK_RATE, rxdata, 2);
+
+	/* Set link rate directly */
+	intel_dp->link_bw = rxdata[0];
+	/* Preserve 7:5 when setting lane count */
+	intel_dp->lane_count &= 0xE0;
+	intel_dp->lane_count |= rxdata[1];
+
+	DRM_DEBUG_KMS("Displayport: Link training testing - %d lanes @ %02x link rate\n",
+			intel_dp->lane_count, intel_dp->link_bw);
+
+	/* Train the link */
+	intel_dp->DP = intel_dig_port->saved_port_bits |
+					DDI_BUF_CTL_ENABLE |
+					DDI_BUF_EMP_400MV_0DB_HSW;
+	intel_dp->DP |= DDI_PORT_WIDTH(intel_dp->lane_count);
+
+	if (!intel_dp_start_link_train(intel_dp)) {
+		DRM_DEBUG_KMS("Displayport: intel_dp_start_link_train() failed\n");
+		test_result = false;
+	}
+	if (!intel_dp_complete_link_train(intel_dp)) {
+		DRM_DEBUG_KMS("Displayport: intel_dp_complete_link_train() failed\n");
+		test_result = false;
+	}
+	if (!intel_dp_stop_link_train(intel_dp)) {
+		DRM_DEBUG_KMS("Displayport: intel_dp_stop_link_train() failed\n");
+		test_result = false;
+	}
+
+	/* Check link status for successful completion */
+	if (drm_dp_channel_eq_ok(link_status, intel_dp->lane_count))
+		test_result = true;
+
+	if (test_result == false) {
+		/* Clear link training here */
+		intel_dp_set_idle_link_train(intel_dp);
+	}
+
 	return test_result;
 }