| Message ID | 1478755950-2778-1-git-send-email-dhinakaran.pandiyan@intel.com (mailing list archive) |
| --- | --- |
| State | New, archived |
On Wed, Nov 09, 2016 at 09:32:29PM -0800, Dhinakaran Pandiyan wrote:
> We store DP link rates as link clock frequencies in kHz, just like all
> other clock values. But, DP link rates in the DP Spec are expressed in
> Gbps/lane, which seems to have led to some confusion.
>
[snip]
>
>  static int
>  intel_dp_max_data_rate(int max_link_clock, int max_lanes)
>  {
> -        return (max_link_clock * max_lanes * 8) / 10;
> +        /* max_link_clock is the link symbol clock (LS_Clk) in kHz and not the
> +         * link rate that is generally expressed in Gbps. Since, 8 bits data is
> +         * transmitted every LS_Clk per lane, there is no need to account for
> +         * the channel encoding that is done in the PHY layer here.
> +         */
> +

Max Link is the max link rate for the actual physical link of the DP cable.
So eventually the PHY layer is going to encode the bits and generate 10 bits
for every 8 bits, so the code rate will be 8/10 and the useful net rate
(sending useful bits) will be link_rate * code rate = link_rate * 8/10.
So the max available link rate at the link layer should be this useful net
rate, and so IMHO we should take this channel encoding into account.

Manasi

> +        return (max_link_clock * max_lanes);
>  }
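A minimal standalone sketch of the arithmetic under debate (an illustration, not code from this thread): the spec's bit-rate formula and the driver's symbol-clock formula agree on the HBR2 x4 capacity, provided the 8/10 coding factor is applied exactly once.

#include <stdio.h>

int main(void)
{
        const int lanes = 4;
        const long bit_rate_kbps = 5400000;  /* HBR2: 5.4 Gbps per lane */
        const long ls_clk_khz = 540000;      /* HBR2 link symbol clock, kHz */

        /* Spec view: strip the 8b/10b coding overhead, then bits -> bytes */
        long from_bit_rate = bit_rate_kbps * lanes * 8 / 10 / 8;

        /* Driver view: LS_Clk already excludes the coding overhead, and
         * each symbol carries one 8-bit data byte per lane */
        long from_ls_clk = ls_clk_khz * lanes;

        printf("%ld kBps vs %ld kBps\n", from_bit_rate, from_ls_clk);
        /* Both print 2160000; applying 8/10 to LS_Clk as well would
         * double count the encoding and report 1728000 kBps */
        return 0;
}

In other words, the disagreement reduces to whether max_link_clock is the raw bit rate (where 8/10 belongs) or the symbol clock (where it has already been paid).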
On Thu, 2016-11-10 at 15:55 -0800, Manasi Navare wrote:
> On Wed, Nov 09, 2016 at 09:32:29PM -0800, Dhinakaran Pandiyan wrote:
> > We store DP link rates as link clock frequencies in kHz, just like all
> > other clock values. But, DP link rates in the DP Spec are expressed in
> > Gbps/lane, which seems to have led to some confusion.
> >
[snip]
> > +        /* max_link_clock is the link symbol clock (LS_Clk) in kHz and not the
> > +         * link rate that is generally expressed in Gbps. Since, 8 bits data is
> > +         * transmitted every LS_Clk per lane, there is no need to account for
> > +         * the channel encoding that is done in the PHY layer here.
> > +         */
> > +
>
> Max Link is the max link rate for the actual physical link of the DP cable.

The units are quite a mess, as the existing comments indicate. For e.g., see

1. intel_dp_link_required()
   "* The units on the numbers in the next two are... bizarre. ..."

2. intel_read_sink_rates()
   "/* Value read is in kHz while drm clock is saved in deca-kHz */"

This has led to confusing /10's that exist just to match the units to
deca-Hz and deca-kHz. I believe the confusion is due to the following:

1. eDP DPCD registers hold link rates that have to be multiplied by 200 kHz
to get the final value in kHz. This is NOT the link symbol clock. E.g., in
DPCD SUPPORTED_LINK_RATES for HBR2, the value will be 27000, or 6978h (from
the DP Spec).

To get the bit rate capacity:

        Link rate * Num. lanes * 8/10
        = 27000 * 200 kHz * 4 * 8/10
        = 17280000 kbits per second
        = 2160000 kBytes per second

But, we arbitrarily divide by 10 in intel_edp_init_dpcd():

        /* Value read is in kHz while drm clock is saved in deca-kHz */
        intel_dp->sink_rates[i] = (val * 200) / 10;

2. Let's look at source rates.

        static const int bxt_rates[] = { 162000, 216000, 243000, 270000,
                                         324000, 432000, 540000 };
        static const int skl_rates[] = { 162000, 216000, 270000,
                                         324000, 432000, 540000 };
        static const int default_rates[] = { 162000, 270000, 540000 };

These cannot be link rates in Gbps, nor link rates expressed in kHz. If it
were the link rate, HBR2 would be 5.4 Gbps or 5400000 kHz, but the value is
540000. That's because this is the Link Symbol Clock (LS_Clk) in kHz.

If you look at

        commit a4fc5ed69817c73e32571ad7837bb707f9890009
        Author: Keith Packard <keithp@keithp.com>
        Date:   Tue Apr 7 16:16:42 2009 -0700

            drm/i915: Add Display Port support

you will see that the units all make sense, but somewhere in between we
have messed up the conversions.

To check if a mode is valid, the comparison should be between
mode_clock(kHz) * bpp / 8 and link_clock(kHz) * lanes, the units being
kBps, since 1 symbol corresponds to 1 byte of data.

-DK

> So eventually the PHY layer is going to encode the bits and generate 10
> bits for every 8 bits, so the code rate will be 8/10 and the useful net
> rate (sending useful bits) will be link_rate * code rate = link_rate * 8/10.
> So the max available link rate at the link layer should be this useful
> net rate, and so IMHO we should take this channel encoding into account.
>
> Manasi
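The unit chain DK describes can be sketched as follows; the helper names are hypothetical, for illustration only, not the actual i915 functions.

#include <stdbool.h>

/* DPCD SUPPORTED_LINK_RATES entries are the per-lane bit rate in 200 kHz
 * units; dividing the resulting kbps value by 10 (10 bits per 8b/10b
 * symbol) gives the link symbol clock in kHz that i915 stores. */
static int dpcd_link_rate_to_ls_clk_khz(int dpcd_val)
{
        /* HBR2: 0x6978 == 27000 -> 27000 * 200 = 5400000 kbps -> 540000 kHz */
        return dpcd_val * 200 / 10;
}

/* Mode validation then compares like units, kBps on both sides:
 * each LS_Clk tick moves one 8-bit data byte per lane. */
static bool mode_fits_link(int mode_clock_khz, int bpp,
                           int ls_clk_khz, int lanes)
{
        return mode_clock_khz * bpp / 8 <= ls_clk_khz * lanes;
}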
On Wed, Nov 09, 2016 at 09:32:29PM -0800, Dhinakaran Pandiyan wrote:
> We store DP link rates as link clock frequencies in kHz, just like all
> other clock values. But, DP link rates in the DP Spec are expressed in
> Gbps/lane, which seems to have led to some confusion.
>
[snip]
>
>  static int
>  intel_dp_link_required(int pixel_clock, int bpp)
>  {
> -        return (pixel_clock * bpp + 9) / 10;
> +        /* pixel_clock is in kHz, divide bpp by 8 to return the value in kBps*/

Probably best not to mix in the kBps unit here and instead just talk
in terms of the symbol clock.

> +        return (pixel_clock * bpp + 7) / 8;

DIV_ROUND_UP()

>  }
>
>  static int
>  intel_dp_max_data_rate(int max_link_clock, int max_lanes)
>  {
> -        return (max_link_clock * max_lanes * 8) / 10;
> +        /* max_link_clock is the link symbol clock (LS_Clk) in kHz and not the
> +         * link rate that is generally expressed in Gbps. Since, 8 bits data is
> +         * transmitted every LS_Clk per lane, there is no need to account for
> +         * the channel encoding that is done in the PHY layer here.
> +         */
> +
> +        return (max_link_clock * max_lanes);

Useless parens.

>  }
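A sketch of what intel_dp_link_required() might look like with both review comments applied; this is a guess at the follow-up, not the committed v2. DIV_ROUND_UP() is the standard kernel macro from <linux/kernel.h>.

#include <linux/kernel.h>        /* DIV_ROUND_UP() */

static int
intel_dp_link_required(int pixel_clock, int bpp)
{
        /* pixel_clock is in kHz; each symbol carries 8 data bits */
        return DIV_ROUND_UP(pixel_clock * bpp, 8);
}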
On Fri, 2016-11-11 at 15:39 +0200, Ville Syrjälä wrote:
> On Wed, Nov 09, 2016 at 09:32:29PM -0800, Dhinakaran Pandiyan wrote:
[snip]
> > +        return (pixel_clock * bpp + 7) / 8;
>
> DIV_ROUND_UP()
>
[snip]
> > +        return (max_link_clock * max_lanes);
>
> Useless parens.

Thanks for the review, will fix these two lines.

-DK
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 8f313c1..7a9e122 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -161,33 +161,23 @@ static u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
 	return min(source_max, sink_max);
 }
 
-/*
- * The units on the numbers in the next two are... bizarre. Examples will
- * make it clearer; this one parallels an example in the eDP spec.
- *
- * intel_dp_max_data_rate for one lane of 2.7GHz evaluates as:
- *
- * 270000 * 1 * 8 / 10 == 216000
- *
- * The actual data capacity of that configuration is 2.16Gbit/s, so the
- * units are decakilobits. ->clock in a drm_display_mode is in kilohertz -
- * or equivalently, kilopixels per second - so for 1680x1050R it'd be
- * 119000. At 18bpp that's 2142000 kilobits per second.
- *
- * Thus the strange-looking division by 10 in intel_dp_link_required, to
- * get the result in decakilobits instead of kilobits.
- */
-
 static int
 intel_dp_link_required(int pixel_clock, int bpp)
 {
-	return (pixel_clock * bpp + 9) / 10;
+	/* pixel_clock is in kHz, divide bpp by 8 to return the value in kBps*/
+	return (pixel_clock * bpp + 7) / 8;
 }
 
 static int
 intel_dp_max_data_rate(int max_link_clock, int max_lanes)
 {
-	return (max_link_clock * max_lanes * 8) / 10;
+	/* max_link_clock is the link symbol clock (LS_Clk) in kHz and not the
+	 * link rate that is generally expressed in Gbps. Since, 8 bits data is
+	 * transmitted every LS_Clk per lane, there is no need to account for
+	 * the channel encoding that is done in the PHY layer here.
+	 */
+
+	return (max_link_clock * max_lanes);
 }
 
 static int
We store DP link rates as link clock frequencies in kHz, just like all
other clock values. But, DP link rates in the DP Spec are expressed in
Gbps/lane, which seems to have led to some confusion.

E.g., for HBR2
Max. data rate = 5.4 Gbps/lane x 4 lane x 8/10 x 1/8 = 2160000 kBps
where 8/10 is for channel encoding and 1/8 is for bit to Byte conversion

Using link clock frequency, like we do
Max. data rate = 540000 kHz * 4 lanes = 2160000 kSymbols/s
Because each symbol carries 8 bits of data, this is 2160000 kBps
and there is no need to account for channel encoding here.

But, currently we do 540000 kHz * 4 lanes * (8/10) = 1728000 kBps

Similarly, while computing the required link bandwidth for a mode,
there is a mysterious 1/10 term.
This should simply be pixel_clock kHz * bpp * 1/8 to give the final
result in kBps.

Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
---
 drivers/gpu/drm/i915/intel_dp.c | 28 +++++++++-------------------
 1 file changed, 9 insertions(+), 19 deletions(-)
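A worked check of this arithmetic as a standalone sketch; the 4k mode's pixel clock is an assumed illustrative value.

#include <stdio.h>

int main(void)
{
        const int pixel_clock = 533250;  /* kHz; 3840x2160@60 CVT-RB, assumed */
        const int bpp = 24;
        const int link_clock = 540000;   /* HBR2 LS_Clk in kHz */
        const int lanes = 4;

        int required = (pixel_clock * bpp + 7) / 8;  /* 1599750 kBps */
        int available = link_clock * lanes;          /* 2160000 kBps */

        printf("required %d kBps, available %d kBps -> %s\n",
               required, available,
               required <= available ? "fits" : "does not fit");
        /* With the old *8/10 factor, "available" would be 1728000 kBps,
         * still enough for this mode but tighter than the link really is */
        return 0;
}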