| Message ID | 55282C1F.3000600@free.fr (mailing list archive) |
|---|---|
| State | New, archived |
On Fri, Apr 10, 2015 at 10:01:35PM +0200, Mason wrote:
> There is, however, an important difference between loop-based
> delays and timer-based delays; CPU frequencies typically fall
> in the 50-5000 MHz range, while timer frequencies typically
> span tens of kHz up to hundreds of MHz. For example, 90 kHz
> is sometimes provided in multimedia systems (MPEG TS).

Why would you want to use such a slowly clocked counter for something
which is supposed to be able to produce delays in the micro-second and
potentially the nanosecond range?

get_cycles(), which is what the timer based delay is based upon, is
supposed to be a _high resolution counter_, preferably running at
the same kind of speeds as the CPU, though with a fixed clock rate.
It most definitely is not supposed to be in the kHz range.
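[Editor's note: the resolution argument can be made concrete with a little arithmetic. The finest non-zero delay a counter can express is one tick, i.e. 1/freq seconds. A standalone sketch with illustrative rates (my code, not from the thread):]

```c
#include <stdio.h>

/* Smallest non-zero delay a counter running at `hz` can express:
 * one tick, i.e. 1e9 / hz nanoseconds. */
static unsigned long long one_tick_ns(unsigned long long hz)
{
	return 1000000000ULL / hz;
}

int main(void)
{
	printf("90 kHz -> %llu ns/tick\n", one_tick_ns(90000ULL));      /* ~11111 ns, i.e. ~11.1 us */
	printf("24 MHz -> %llu ns/tick\n", one_tick_ns(24000000ULL));   /* ~41 ns */
	printf("1 GHz  -> %llu ns/tick\n", one_tick_ns(1000000000ULL)); /* 1 ns */
	return 0;
}
```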
On 10/04/2015 22:42, Russell King - ARM Linux wrote:
> On Fri, Apr 10, 2015 at 10:01:35PM +0200, Mason wrote:
>> There is, however, an important difference between loop-based
>> delays and timer-based delays; CPU frequencies typically fall
>> in the 50-5000 MHz range, while timer frequencies typically
>> span tens of kHz up to hundreds of MHz. For example, 90 kHz
>> is sometimes provided in multimedia systems (MPEG TS).
>
> Why would you want to use such a slowly clocked counter for something
> which is supposed to be able to produce delays in the micro-second and
> potentially the nanosecond range?
>
> get_cycles(), which is what the timer based delay is based upon, is
> supposed to be a _high resolution counter_, preferably running at
> the same kind of speeds as the CPU, though with a fixed clock rate.
> It most definitely is not supposed to be in the kHz range.

If there's only a single fixed clock in the system, I'd
use it for sched_clock, clocksource, and timer delay.
Are there other options?

It was you who wrote some time ago: "Timers are preferred
because of the problems with the software delay loop."
(My system implements DVFS.)

It seems to me that a 90 kHz timer is still better than
the jiffy counter, or am I mistaken again?

Regards.
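[Editor's note: for reference, this is roughly how a platform with a single fixed-rate counter wires it up for all three consumers. A hedged sketch: `my_timer_init`, `read_delay_timer`, and the `counter` mapping are made-up names, and the 90 kHz rate is taken from the discussion; the APIs themselves (`struct delay_timer` / `register_current_timer_delay()` on ARM, `sched_clock_register()`, `clocksource_mmio_init()`) are real kernel interfaces of that era.]

```c
#include <linux/clocksource.h>
#include <linux/io.h>
#include <linux/sched_clock.h>
#include <asm/delay.h>		/* struct delay_timer (ARM-specific) */

#define XTAL_HZ	90000		/* the 90 kHz counter under discussion */

static void __iomem *counter;	/* free-running up-counter, assumed ioremap'd elsewhere */

static u64 notrace my_read_sched_clock(void)
{
	return readl_relaxed(counter);
}

static unsigned long read_delay_timer(void)
{
	return readl_relaxed(counter);
}

static struct delay_timer my_delay_timer = {
	.read_current_timer	= read_delay_timer,
	.freq			= XTAL_HZ,
};

static void __init my_timer_init(void)
{
	/* One counter feeding the clocksource, sched_clock and udelay: */
	clocksource_mmio_init(counter, "my-counter", XTAL_HZ, 300, 32,
			      clocksource_mmio_readl_up);
	sched_clock_register(my_read_sched_clock, 32, XTAL_HZ);
	register_current_timer_delay(&my_delay_timer);
}
```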
On Fri, Apr 10, 2015 at 11:22:56PM +0200, Mason wrote:
> On 10/04/2015 22:42, Russell King - ARM Linux wrote:
> > On Fri, Apr 10, 2015 at 10:01:35PM +0200, Mason wrote:
> >> There is, however, an important difference between loop-based
> >> delays and timer-based delays; CPU frequencies typically fall
> >> in the 50-5000 MHz range, while timer frequencies typically
> >> span tens of kHz up to hundreds of MHz. For example, 90 kHz
> >> is sometimes provided in multimedia systems (MPEG TS).
> >
> > Why would you want to use such a slowly clocked counter for something
> > which is supposed to be able to produce delays in the micro-second and
> > potentially the nanosecond range?
> >
> > get_cycles(), which is what the timer based delay is based upon, is
> > supposed to be a _high resolution counter_, preferably running at
> > the same kind of speeds as the CPU, though with a fixed clock rate.
> > It most definitely is not supposed to be in the kHz range.
>
> If there's only a single fixed clock in the system, I'd
> use it for sched_clock, clocksource, and timer delay.
> Are there other options?
>
> It was you who wrote some time ago: "Timers are preferred
> because of the problems with the software delay loop."
> (My system implements DVFS.)
>
> It seems to me that a 90 kHz timer is still better than
> the jiffy counter, or am I mistaken again?

Given the choice of a 90kHz timer vs using a calibrated software
delay loop, the software delay loop wins. I never envisioned that
someone would be silly enough to think that a 90kHz timer would
somehow be suitable to replace a software delay loop calibrated
against a timer.
On 11/04/2015 09:30, Russell King - ARM Linux wrote:
> On Fri, Apr 10, 2015 at 11:22:56PM +0200, Mason wrote:
>
>> It was you who wrote some time ago: "Timers are preferred
>> because of the problems with the software delay loop."
>> (My system implements DVFS.)
>>
>> It seems to me that a 90 kHz timer is still better than
>> the jiffy counter, or am I mistaken again?
>
> Given the choice of a 90kHz timer vs using a calibrated software
> delay loop, the software delay loop wins. I never envisioned that
> someone would be silly enough to think

I'm full of surprises.

> that a 90kHz timer would somehow be suitable to replace a software
> delay loop calibrated against a timer.

Only one message ago, you were arguing that loop-based delays
could be up to 50% inaccurate. Thus, if one wanted to spin for
500 µs, they'd have to request 1 ms just to be sure. An 11 µs
accuracy looks like a better deal to me, overall.

Add DVFS to the mix, and that 500 µs loop-based delay turns into
a 50 µs delay when the other core decides to boost the cluster
from 100 MHz to 1 GHz. And then drivers break randomly.

Regards.
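[Editor's note: the DVFS failure mode described here is plain proportionality. A standalone illustration using the numbers from the message (my code, not the kernel's):]

```c
#include <stdio.h>

int main(void)
{
	/* A software delay loop calibrated at 100 MHz executes its
	 * iterations 10x faster once DVFS boosts the cluster to 1 GHz,
	 * so the wall-clock delay shrinks by the same factor. */
	unsigned int requested_us = 500;
	unsigned int calibrated_mhz = 100;	/* frequency at calibration time */
	unsigned int boosted_mhz = 1000;	/* frequency after the DVFS boost */

	unsigned int actual_us = requested_us * calibrated_mhz / boosted_mhz;
	printf("requested %u us, got %u us\n", requested_us, actual_us); /* 50 us */
	return 0;
}
```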
On Sat, Apr 11, 2015 at 01:57:12PM +0200, Mason wrote:
> On 11/04/2015 09:30, Russell King - ARM Linux wrote:
>
> > On Fri, Apr 10, 2015 at 11:22:56PM +0200, Mason wrote:
> >
> >> It was you who wrote some time ago: "Timers are preferred
> >> because of the problems with the software delay loop."
> >> (My system implements DVFS.)
> >>
> >> It seems to me that a 90 kHz timer is still better than
> >> the jiffy counter, or am I mistaken again?
> >
> > Given the choice of a 90kHz timer vs using a calibrated software
> > delay loop, the software delay loop wins. I never envisioned that
> > someone would be silly enough to think
>
> I'm full of surprises.
>
> > that a 90kHz timer would somehow be suitable to replace a software
> > delay loop calibrated against a timer.
>
> Only one message ago, you were arguing that loop-based delays
> could be up to 50% inaccurate. Thus, if one wanted to spin for
> 500 µs, they'd have to request 1 ms just to be sure. An 11 µs
> accuracy looks like a better deal to me, overall.
>
> Add DVFS to the mix, and that 500 µs loop-based delay turns into
> a 50 µs delay when the other core decides to boost the cluster
> from 100 MHz to 1 GHz. And then drivers break randomly.

*Think* please. What you've just said is total rubbish.

As you've already found out, with a 90kHz clock, if you request a 5µs
delay, you get a zero delay. That's because 90kHz has _insufficient_
resolution to be used for a microsecond delay, and in that case it is
_FAR_ better to use the software delay loop.

Asking for a 1µs and getting an 11µs delay is _total_ bollocks. It's
worse than a 50% error. It's an 1100% error. As I say, *THINK*. I know
it's a foreign idea.

You're not going to convince me to accept the idea that a 90kHz
counter is going to be suitable for a delay.

In fact, what you're actually doing is convincing me that we need to
put a test into arch/arm/lib/delay.c to stop this stupidity: prevent
it considering any counter which does not have a clock rate in the
MHz range. That means we will reject your stupid 90kHz counter, and
your problem will be solved.

What this also means is that if you only have a 90kHz counter
available, doing DVFS is also a stupid idea.
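[Editor's note: the test proposed here might look roughly like the following, inserted at the top of register_current_timer_delay() in arch/arm/lib/delay.c. This is only a sketch: the 1 MHz cut-off and the exact placement are my assumptions, not the check that was eventually merged.]

```c
/* Sketch, in the style of arch/arm/lib/delay.c circa 2015. */
void __init register_current_timer_delay(const struct delay_timer *timer)
{
	/* Refuse counters too coarse for microsecond delays: below
	 * 1 MHz, a single tick is already longer than 1 us. */
	if (timer->freq < 1000000) {
		pr_warn("Ignoring %lu Hz delay timer: insufficient resolution\n",
			timer->freq);
		return;
	}

	/* ... existing registration logic continues here ... */
}
```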
On 11/04/2015 14:10, Russell King - ARM Linux wrote:
> What this also means is that if you only have a 90kHz counter
> available, doing DVFS is also a stupid idea.

I must be missing something, because I don't see why DVFS (i.e.
saving energy when the system is idle) is a stupid idea when only
a 90 kHz counter is available? What's the connection?
```diff
diff --git a/arch/arm/lib/delay.c b/arch/arm/lib/delay.c
index 312d43e..3cfbd07 100644
--- a/arch/arm/lib/delay.c
+++ b/arch/arm/lib/delay.c
@@ -66,7 +66,7 @@ static void __timer_const_udelay(unsigned long xloops)
 {
 	unsigned long long loops = xloops;
 	loops *= arm_delay_ops.ticks_per_jiffy;
-	__timer_delay(loops >> UDELAY_SHIFT);
+	__timer_delay((loops >> UDELAY_SHIFT) + 1);
 }
```
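[Editor's note: the right shift in __timer_const_udelay() truncates, so any sub-tick remainder is discarded; on a 90 kHz counter a 5 µs request therefore yields zero ticks, producing the zero delay noted earlier in the thread. Adding 1 rounds up, so the delay is never shorter than requested, at the cost of up to one extra tick. A standalone illustration of the arithmetic (my code, simplifying away the UDELAY_SHIFT fixed-point details):]

```c
#include <stdio.h>

int main(void)
{
	unsigned long long freq_hz = 90000;	/* the 90 kHz counter */
	unsigned long long usecs = 5;

	/* Equivalent of the truncating tick conversion in
	 * __timer_const_udelay(): 5 us * 90 kHz = 0.45 ticks -> 0. */
	unsigned long long ticks = usecs * freq_hz / 1000000;

	printf("before the patch: udelay(%llu) spins for %llu ticks\n",
	       usecs, ticks);
	printf("after the patch:  udelay(%llu) spins for %llu ticks (~%llu us)\n",
	       usecs, ticks + 1, (ticks + 1) * 1000000 / freq_hz);
	return 0;
}
```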