| Message ID | 1395648818-14310-1-git-send-email-arun.r.murthy@intel.com (mailing list archive) |
|---|---|
| State | New, archived |
On Mon, Mar 24, 2014 at 01:43:38PM +0530, Arun R Murthy wrote:
> In wait for vblank use usleep_range, which will use hrtimers instead of
> msleep. Using msleep(1~20) there are more chances of sleeping for 20ms.
> Using usleep_range uses hrtimers and hence are precise, worst case will
> trigger an interrupt at the higher/max timeout.
>
> Signed-off-by: Arun R Murthy <arun.r.murthy@intel.com>
> ---
>  drivers/gpu/drm/i915/intel_drv.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
> index 44067bc..079280a 100644
> --- a/drivers/gpu/drm/i915/intel_drv.h
> +++ b/drivers/gpu/drm/i915/intel_drv.h
> @@ -52,7 +52,7 @@
> 			break;						\
> 		}							\
> 		if (W && drm_can_sleep()) {				\
> -			msleep(W);					\
> +			usleep_range(W * 1000, W * 2 * 1000);		\
> 		} else {						\
> 			cpu_relax();					\
> 		}							\

Ok. But W is still just a random value we picked for being the minimum
legal value for msleep(). So just usleep_range(500, 2000) or somesuch
will be fine. We can rename W to CAN_SLEEP if that helps.
-Chris

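For context, here is a minimal sketch of what the _wait_for() loop could look like with the fixed usleep_range(500, 2000) suggestion applied. The loop skeleton (timeout bookkeeping, return value) is reconstructed from the quoted diff context and common kernel idioms rather than copied verbatim from intel_drv.h, and drm_can_sleep()/cpu_relax() are assumed to come from the headers intel_drv.h already pulls in; treat it as an illustration only:

	/* Sketch only: skeleton reconstructed from the diff context above. */
	#include <linux/delay.h>	/* usleep_range() */
	#include <linux/errno.h>	/* ETIMEDOUT */
	#include <linux/jiffies.h>	/* jiffies, time_after(), msecs_to_jiffies() */

	#define _wait_for(COND, MS, W) ({					\
		unsigned long timeout__ = jiffies + msecs_to_jiffies(MS) + 1;	\
		int ret__ = 0;							\
		while (!(COND)) {						\
			if (time_after(jiffies, timeout__)) {			\
				ret__ = -ETIMEDOUT;				\
				break;						\
			}							\
			if ((W) && drm_can_sleep()) {				\
				/* fixed hrtimer-backed range, per review */	\
				usleep_range(500, 2000);			\
			} else {						\
				cpu_relax();					\
			}							\
		}								\
		ret__;								\
	})
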
> diff --git a/drivers/gpu/drm/i915/intel_drv.h
> b/drivers/gpu/drm/i915/intel_drv.h
> index 44067bc..079280a 100644
> --- a/drivers/gpu/drm/i915/intel_drv.h
> +++ b/drivers/gpu/drm/i915/intel_drv.h
> @@ -52,7 +52,7 @@
> 			break;						\
> 		}							\
> 		if (W && drm_can_sleep()) {				\
> -			msleep(W);					\
> +			usleep_range(W * 1000, W * 2 * 1000);		\
> 		} else {						\
> 			cpu_relax();					\
> 		}							\

Ok. But W is still just a random value we picked for being the minimum
legal value for msleep(). So just usleep_range(500, 2000) or somesuch
will be fine. We can rename W to CAN_SLEEP if that helps.

Ok, will take care of this in my next version of the patch.

Thanks and Regards,
Arun R Murthy
------------------

On Mon, 24 Mar 2014, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> On Mon, Mar 24, 2014 at 01:43:38PM +0530, Arun R Murthy wrote:
>> In wait for vblank use usleep_range, which will use hrtimers instead of
>> msleep. Using msleep(1~20) there are more chances of sleeping for 20ms.
>> Using usleep_range uses hrtimers and hence are precise, worst case will
>> trigger an interrupt at the higher/max timeout.
>>
>> Signed-off-by: Arun R Murthy <arun.r.murthy@intel.com>
>> ---
>>  drivers/gpu/drm/i915/intel_drv.h | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
>> index 44067bc..079280a 100644
>> --- a/drivers/gpu/drm/i915/intel_drv.h
>> +++ b/drivers/gpu/drm/i915/intel_drv.h
>> @@ -52,7 +52,7 @@
>> 			break;						\
>> 		}							\
>> 		if (W && drm_can_sleep()) {				\
>> -			msleep(W);					\
>> +			usleep_range(W * 1000, W * 2 * 1000);		\
>> 		} else {						\
>> 			cpu_relax();					\
>> 		}							\
>
> Ok. But W is still just a random value we picked for being the minimum
> legal value for msleep(). So just usleep_range(500, 2000) or somesuch
> will be fine. We can rename W to CAN_SLEEP if that helps.

We do use _wait_for directly from intel_dp.c with W == 10 to not retry so many times on what's expected to be a long wait.

BR,
Jani.

> -Chris
>
> --
> Chris Wilson, Intel Open Source Technology Centre

>> diff --git a/drivers/gpu/drm/i915/intel_drv.h
>> b/drivers/gpu/drm/i915/intel_drv.h
>> index 44067bc..079280a 100644
>> --- a/drivers/gpu/drm/i915/intel_drv.h
>> +++ b/drivers/gpu/drm/i915/intel_drv.h
>> @@ -52,7 +52,7 @@
>> 			break;						\
>> 		}							\
>> 		if (W && drm_can_sleep()) {				\
>> -			msleep(W);					\
>> +			usleep_range(W * 1000, W * 2 * 1000);		\
>> 		} else {						\
>> 			cpu_relax();					\
>> 		}							\
>
> Ok. But W is still just a random value we picked for being the minimum
> legal value for msleep(). So just usleep_range(500, 2000) or somesuch
> will be fine. We can rename W to CAN_SLEEP if that helps.

We do use _wait_for directly from intel_dp.c with W == 10 to not retry so many times on what's expected to be a long wait.

It's not expected to be too long, we tend to get vblank every 16ms. With using usleep_range
with min and max as 1-2ms, we have observed that this function consuming 4ms to 17ms.
Having 10ms sleep timer, we might tend to be blocking for 6ms in some cases (from the above data).
Moreover 10ms usleep doesn't guarantee that we get a chance to execute after 10ms, it might
get increased due to scheduling. So the ideal solution would be to use usleep_range which uses hrtimers.

Thanks and Regards,
Arun R Murthy
------------------

On Mon, Mar 24, 2014 at 09:34:35AM +0000, Murthy, Arun R wrote:
> >> diff --git a/drivers/gpu/drm/i915/intel_drv.h
> >> b/drivers/gpu/drm/i915/intel_drv.h
> >> index 44067bc..079280a 100644
> >> --- a/drivers/gpu/drm/i915/intel_drv.h
> >> +++ b/drivers/gpu/drm/i915/intel_drv.h
> >> @@ -52,7 +52,7 @@
> >> 			break;						\
> >> 		}							\
> >> 		if (W && drm_can_sleep()) {				\
> >> -			msleep(W);					\
> >> +			usleep_range(W * 1000, W * 2 * 1000);		\
> >> 		} else {						\
> >> 			cpu_relax();					\
> >> 		}							\
> >
> > Ok. But W is still just a random value we picked for being the minimum
> > legal value for msleep(). So just usleep_range(500, 2000) or somesuch
> > will be fine. We can rename W to CAN_SLEEP if that helps.
>
> We do use _wait_for directly from intel_dp.c with W == 10 to not retry so many times on what's expected to be a long wait.

Hmm, missed that. We can fall back to the fallback plan of switching
based on W between msleep and usleep_range. Using -1 would be best.

	if (W && drm_can_sleep()) {
		if (__builtin_constant_p(W) && W < 0)
			usleep_range(500, 2000);
		else
			msleep(W);
	}

> It's not expected to be too long, we tend to get vblank every 16ms. With using usleep_range
> with min and max as 1-2ms, we have observed that this function consuming 4ms to 17ms.

vs msleep()? That would be nice information to capture in the changelog.
We expect the average wait here to be 8ms, ofc.

> Having 10ms sleep timer, we might tend to be blocking for 6ms in some cases (from the above data).
> Moreover 10ms usleep doesn't guarantee that we get a chance to execute after 10ms, it might
> get increased due to scheduling. So the ideal solution would be to use usleep_range which uses hrtimers.

Ideal for what? We only use the sleeping path where we do not care too
great about the latency, otherwise we use the busy-wait path. Improving
the latency of the sleep without adversely affecting overall system
performance/power is ideal.
-Chris

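As a concrete, purely illustrative reading of that fallback plan, the check could live in a small helper next to the macro. The helper name __wait_for_sleep() below is made up for this sketch, and the call-site comments only restate the intent discussed above (a negative compile-time-constant W selects the precise hrtimer path, while intel_dp.c's W == 10 keeps msleep):

	#include <linux/delay.h>	/* msleep(), usleep_range() */

	/* Hypothetical helper, sketching the fallback described above. */
	#define __wait_for_sleep(W)						\
	do {									\
		if (__builtin_constant_p(W) && (W) < 0)			\
			usleep_range(500, 2000);  /* precise, hrtimer-backed */	\
		else								\
			msleep(W);                /* coarse, jiffy-based */	\
	} while (0)

	/*
	 * Illustrative call sites (not taken from the driver):
	 *   _wait_for(cond, 50, -1);    selects usleep_range(500, 2000)
	 *   _wait_for(cond, 5000, 10);  keeps msleep(10), as intel_dp.c expects
	 */
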
On Mon, Mar 24, 2014 at 09:48:09AM +0000, Chris Wilson wrote:
> On Mon, Mar 24, 2014 at 09:34:35AM +0000, Murthy, Arun R wrote:
> > >> diff --git a/drivers/gpu/drm/i915/intel_drv.h
> > >> b/drivers/gpu/drm/i915/intel_drv.h
> > >> index 44067bc..079280a 100644
> > >> --- a/drivers/gpu/drm/i915/intel_drv.h
> > >> +++ b/drivers/gpu/drm/i915/intel_drv.h
> > >> @@ -52,7 +52,7 @@
> > >> 			break;						\
> > >> 		}							\
> > >> 		if (W && drm_can_sleep()) {				\
> > >> -			msleep(W);					\
> > >> +			usleep_range(W * 1000, W * 2 * 1000);		\
> > >> 		} else {						\
> > >> 			cpu_relax();					\
> > >> 		}							\
> > >
> > > Ok. But W is still just a random value we picked for being the minimum
> > > legal value for msleep(). So just usleep_range(500, 2000) or somesuch
> > > will be fine. We can rename W to CAN_SLEEP if that helps.
> >
> > We do use _wait_for directly from intel_dp.c with W == 10 to not retry so many times on what's expected to be a long wait.
>
> Hmm, missed that. We can fall back to the fallback plan of switching
> based on W between msleep and usleep_range. Using -1 would be best.
>
> 	if (W && drm_can_sleep()) {
> 		if (__builtin_constant_p(W) && W < 0)
> 			usleep_range(500, 2000);
> 		else
> 			msleep(W);
> 	}
>
> > It's not expected to be too long, we tend to get vblank every 16ms. With using usleep_range
> > with min and max as 1-2ms, we have observed that this function consuming 4ms to 17ms.
>
> vs msleep()? That would be nice information to capture in the changelog.
> We expect the average wait here to be 8ms, ofc.
>
> > Having 10ms sleep timer, we might tend to be blocking for 6ms in some cases (from the above data).
> > Moreover 10ms usleep doesn't guarantee that we get a chance to execute after 10ms, it might
> > get increased due to scheduling. So the ideal solution would be to use usleep_range which uses hrtimers.
>
> Ideal for what? We only use the sleeping path where we do not care too
> great about the latency, otherwise we use the busy-wait path. Improving
> the latency of the sleep without adversely affecting overall system
> performance/power is ideal.

We've used it in a few places where we'd care a bit more about latency
(like edid reads) but on most platforms those switched to interrupts now.
-Daniel

> > > Ok. But W is still just a random value we picked for being the
> > > minimum legal value for msleep(). So just usleep_range(500, 2000)
> > > or somesuch will be fine. We can rename W to CAN_SLEEP if that helps.
> >
> > We do use _wait_for directly from intel_dp.c with W == 10 to not retry so many times on what's expected to be a long wait.
>
> Hmm, missed that. We can fall back to the fallback plan of switching
> based on W between msleep and usleep_range. Using -1 would be best.
>
> 	if (W && drm_can_sleep()) {
> 		if (__builtin_constant_p(W) && W < 0)
> 			usleep_range(500, 2000);
> 		else
> 			msleep(W);
> 	}
>

Seems to be a good approach, will take care of this in the next version of the patch.

> > It's not expected to be too long, we tend to get vblank every 16ms.
> > With using usleep_range with min and max as 1-2ms, we have observed that this function consuming 4ms to 17ms.
>
> vs msleep()? That would be nice information to capture in the changelog.
> We expect the average wait here to be 8ms, ofc.
>

Sure, will add this in the changelog.

> > Having 10ms sleep timer, we might tend to be blocking for 6ms in
> > some cases (from the above data). Moreover 10ms usleep doesn't
> > guarantee that we get a chance to execute after 10ms, it might get increased due to scheduling. So the ideal solution would be to use usleep_range which uses hrtimers.
>
> Ideal for what? We only use the sleeping path where we do not care too
> great about the latency, otherwise we use the busy-wait path.
> Improving the latency of the sleep without adversely affecting overall
> system performance/power is ideal.

We've used it in a few places where we'd care a bit more about latency
(like edid reads) but on most platforms those switched to interrupts now.
-Daniel

Thanks and Regards,
Arun R Murthy
------------------

On Mon, Mar 24, 2014 at 09:34:35AM +0000, Murthy, Arun R wrote:
> >> diff --git a/drivers/gpu/drm/i915/intel_drv.h
> >> b/drivers/gpu/drm/i915/intel_drv.h
> >> index 44067bc..079280a 100644
> >> --- a/drivers/gpu/drm/i915/intel_drv.h
> >> +++ b/drivers/gpu/drm/i915/intel_drv.h
> >> @@ -52,7 +52,7 @@
> >> 			break;						\
> >> 		}							\
> >> 		if (W && drm_can_sleep()) {				\
> >> -			msleep(W);					\
> >> +			usleep_range(W * 1000, W * 2 * 1000);		\
> >> 		} else {						\
> >> 			cpu_relax();					\
> >> 		}							\
> >
> > Ok. But W is still just a random value we picked for being the minimum
> > legal value for msleep(). So just usleep_range(500, 2000) or somesuch
> > will be fine. We can rename W to CAN_SLEEP if that helps.
>
> We do use _wait_for directly from intel_dp.c with W == 10 to not retry so many times on what's expected to be a long wait.
>
> It's not expected to be too long, we tend to get vblank every 16ms. With using usleep_range
> with min and max as 1-2ms, we have observed that this function consuming 4ms to 17ms.
> Having 10ms sleep timer, we might tend to be blocking for 6ms in some cases (from the above data).
> Moreover 10ms usleep doesn't guarantee that we get a chance to execute after 10ms, it might
> get increased due to scheduling. So the ideal solution would be to use usleep_range which uses hrtimers.

Can you please do proper quoting when replying? It makes it fairly hard
to figure out what's your response when you reply in-line, especially in
more complicated threads. And top-posting isn't an option either,
especially for more in-depth discussions where the different issues need
to be treated as separate points. You need a real mail client for this,
Outlook won't cut it. Jesse is floating slides internally with all the
details and tons of links to resources.

Thanks,
Daniel

> It's not expected to be too long, we tend to get vblank every 16ms.
> With using usleep_range with min and max as 1-2ms, we have observed that this function consuming 4ms to 17ms.
> Having 10ms sleep timer, we might tend to be blocking for 6ms in some
> cases (from the above data). Moreover 10ms usleep doesn't guarantee
> that we get a chance to execute after 10ms, it might get increased due to scheduling. So the ideal solution would be to use usleep_range which uses hrtimers.

Can you please do proper quoting when replying? It makes it fairly hard
to figure out what's your response when you reply in-line, especially in
more complicated threads. And top-posting isn't an option either,
especially for more in-depth discussions where the different issues need
to be treated as separate points. You need a real mail client for this,
Outlook won't cut it. Jesse is floating slides internally with all the
details and tons of links to resources.

Sure, will take care of this in future.

Thanks and Regards,
Arun R Murthy
------------------

diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index 44067bc..079280a 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -52,7 +52,7 @@
 			break;						\
 		}							\
 		if (W && drm_can_sleep()) {				\
-			msleep(W);					\
+			usleep_range(W * 1000, W * 2 * 1000);		\
 		} else {						\
 			cpu_relax();					\
 		}							\

In wait for vblank, use usleep_range(), which is backed by hrtimers, instead
of msleep(). With msleep(1~20) there is a good chance of sleeping for the
full 20ms. usleep_range() uses hrtimers and is therefore precise; in the
worst case it triggers an interrupt at the upper/max timeout.

Signed-off-by: Arun R Murthy <arun.r.murthy@intel.com>
---
 drivers/gpu/drm/i915/intel_drv.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
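The timing claim in the commit message can be made concrete with a small illustration (this is not code from the patch, and the function name is invented for the example): msleep() rounds the delay up to jiffy granularity, so on an HZ=100 kernel msleep(1) can sleep for roughly 20ms, while usleep_range() arms an hrtimer whose wakeup fires no later than the max bound.

	#include <linux/delay.h>

	/* Illustration only: compares the two delay primitives discussed above. */
	static void vblank_poll_delay_example(void)
	{
		msleep(1);                 /* may sleep anywhere from ~1ms up to ~20ms */
		usleep_range(1000, 2000);  /* wakeup timer fires between 1ms and 2ms */
	}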