
clockevents: Sanitize ticks to nsec conversion

Message ID alpine.DEB.2.02.1309191629040.4089@ionos.tec.linutronix.de (mailing list archive)
State New, archived

Commit Message

Thomas Gleixner Sept. 19, 2013, 2:30 p.m. UTC
Marc Kleine-Budde pointed out that commit 77cc982 "clocksource: use
clockevents_config_and_register() where possible" caused a regression
for some of the converted subarchs.

The reason is that the clockevents core code converts the minimal
hardware tick delta to a nanosecond value for core internal
usage. This conversion is affected by integer math rounding loss, so
the backwards conversion to hardware ticks will likely result in a
value which is less than the configured hardware limitation. The
affected subarchs used their own workaround (SIGH!) which got lost in
the conversion.

The solution for the issue at hand is simple: adding evt->mult - 1 to
the shifted value before the integer division in the core conversion
function takes care of it. But this only works when the scaled math
mult/shift pair satisfies "mult <= 1 << shift". For the case where
"mult > 1 << shift" we can apply the rounding add only to the minimum
delta value to make sure that the backward conversion is not less than
the given hardware limit. For the upper bound we need to omit the
rounding add, because the backwards conversion is always larger than
the original latch value, which would violate the upper bound of the
hardware device.
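
For illustration only (the numbers are assumed, not taken from a
particular affected subarch): a 32.768 kHz clockevent with shift = 32
gets mult of roughly (32768 << 32) / NSEC_PER_SEC, i.e. 140737, so
mult < (1 << shift). A standalone sketch of the round trip:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint64_t mult = 140737, shift = 32, min_ticks = 2;

        /* old delta2ns: plain truncating division */
        uint64_t ns_old = (min_ticks << shift) / mult;          /* 61035 */
        /* clockevents_program_event() converts back: ns * mult >> shift */
        uint64_t ticks_old = (ns_old * mult) >> shift;          /* 1, below the hw minimum */

        /* delta2ns with the evt->mult - 1 rounding add */
        uint64_t ns_new = ((min_ticks << shift) + mult - 1) / mult;     /* 61036 */
        uint64_t ticks_new = (ns_new * mult) >> shift;          /* 2, hw minimum respected */

        printf("old: %llu ns -> %llu ticks, new: %llu ns -> %llu ticks\n",
               (unsigned long long)ns_old, (unsigned long long)ticks_old,
               (unsigned long long)ns_new, (unsigned long long)ticks_new);
        return 0;
}

So without the rounding add a device with a hardware minimum of 2 ticks
gets programmed with 1 tick, which is exactly the regression described
above.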

Though looking closer at the details of that function reveals another
bogosity: The upper bounds check is broken as well. Checking for a
resulting "clc" value greater than KTIME_MAX after the conversion is
pointless. The conversion does:

      u64 clc = (latch << evt->shift) / evt->mult;

So there is no sanity check for (latch << evt->shift) exceeding the
64bit boundary. The latch argument is "unsigned long", so on a 64bit
arch the handed-in argument could easily lead to an unnoticed shift
overflow. With the above rounding fix applied the calculation before
the division is:

       u64 clc = (latch << evt->shift) + evt->mult - 1;

So we need to make sure that neither the shift nor the rounding add
overflows the u64 boundary.
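
Again purely as an illustration (the latch value is made up): on a
64bit arch the shift can silently wrap around, and comparing the
back-shifted result against latch catches that:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        /* stands in for an "unsigned long" latch on a 64bit arch */
        uint64_t latch = 1ULL << 40;
        unsigned int shift = 32;

        uint64_t clc = latch << shift;  /* 2^72 wraps to 2^8 = 256 */

        /* backwards conversion not equal latch => the shift overflowed */
        if ((clc >> shift) != latch)
                clc = ~0ULL;

        printf("clc = 0x%llx\n", (unsigned long long)clc);
        return 0;
}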

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
Cc: Marc Kleine-Budde <mkl@pengutronix.de>
Cc: nicolas.ferre@atmel.com
Cc: Marc Pignat <marc.pignat@hevs.ch>
Cc: john.stultz@linaro.org
Cc: kernel@pengutronix.de
Cc: Ronald Wahl <ronald.wahl@raritan.com>
Cc: LAK <linux-arm-kernel@lists.infradead.org>
Cc: u.kleine-koenig@pengutronix.de
Cc: Ludovic Desroches <ludovic.desroches@atmel.com>

---
 kernel/time/clockevents.c |   64 +++++++++++++++++++++++++++++++++++-----------
 1 file changed, 49 insertions(+), 15 deletions(-)

Comments

Uwe Kleine-König Sept. 19, 2013, 8:03 p.m. UTC | #1
Hi Thomas,

On Thu, Sep 19, 2013 at 04:30:37PM +0200, Thomas Gleixner wrote:
> Index: linux-2.6/kernel/time/clockevents.c
> ===================================================================
> --- linux-2.6.orig/kernel/time/clockevents.c
> +++ linux-2.6/kernel/time/clockevents.c
> @@ -33,29 +33,63 @@ struct ce_unbind {
>  	int res;
>  };
>  
> -/**
> - * clockevents_delta2ns - Convert a latch value (device ticks) to nanoseconds
> - * @latch:	value to convert
> - * @evt:	pointer to clock event device descriptor
> - *
> - * Math helper, returns latch value converted to nanoseconds (bound checked)
> - */
> -u64 clockevent_delta2ns(unsigned long latch, struct clock_event_device *evt)
> +static u64 cev_delta2ns(unsigned long latch, struct clock_event_device *evt,
> +			bool ismax)
>  {
>  	u64 clc = (u64) latch << evt->shift;
> +	u64 rnd = (u64) evt->mult - 1;
>  
>  	if (unlikely(!evt->mult)) {
>  		evt->mult = 1;
>  		WARN_ON(1);
>  	}
I suggest moving the assignment to rnd below this if block, as it
changes mult.

>  
> +	/*
> +	 * Upper bound sanity check. If the backwards conversion is
> +	 * not equal latch, we know that the above shift overflowed.
> +	 */
> +	if (clc >> evt->shift) != (u64)latch)
You didn't compile test, did you? Also the cast on the rhs isn't needed.

> +		clc = ~0ULL;
> +
> +	/*
> +	 * Scaled math oddities:
> +	 *
> +	 * For mult <= (1 << shift) we can safely add mult - 1 to
> +	 * prevent integer rounding loss. So the backwards conversion
It doesn't prevent inexactness to add mult - 1. It (only) asserts that
the ns2delta(delta2ns(latch)) >= latch instead of ... <= latch when not
doing it.

> +	 * from nsec to device ticks will be correct.
> +	 *
> +	 * For mult > (1 << shift), i.e. device frequency is > 1GHz we
> +	 * need to be careful. Adding mult - 1 will result in a value
> +	 * which when converted back to device ticks will be larger
s/will/can/

> +	 * than latch by (mult / (1 << shift)) - 1. For the min_delta
s/by/by up to/

> +	 * calculation we still want to apply this in order to stay
> +	 * above the minimum device ticks limit. For the upper limit
> +	 * we would end up with a latch value larger than the upper
> +	 * limit of the device, so we omit the add to stay below the
> +	 * device upper boundary.
> +	 *
> +	 * Also omit the add if it would overflow the u64 boundary.
> +	 */
> +	if ((~0ULL - clc > rnd) &&
> +	    (!ismax || evt->mult <= (1U << evt->shift)))
> +		clc += rnd;
I would expect that

	if (!ismax)
		if (~0ULL - clc > rnd)
			clc += rnd;
		else
			clc = ~0ULL;

is enough (and a tad more exact in the presence of an overflow). I have
to think about that though.

> +
>  	do_div(clc, evt->mult);
> -	if (clc < 1000)
> -		clc = 1000;
> -	if (clc > KTIME_MAX)
> -		clc = KTIME_MAX;
>  
> -	return clc;
> +	/* Deltas less than 1usec are pointless noise */
> +	return clc > 1000 ? clc : 1000;
> +}
> +
> +/**
> + * clockevents_delta2ns - Convert a latch value (device ticks) to nanoseconds
> + * @latch:	value to convert
> + * @evt:	pointer to clock event device descriptor
> + *
> + * Math helper, returns latch value converted to nanoseconds (bound checked)
> + */
> +u64 clockevent_delta2ns(unsigned long latch, struct clock_event_device *evt)
> +{
> +	return cev_delta2ns(latch, evt, false);
Hmm, I wonder if you need to be more clever in the general case. So you
still may return a value > max_delta_ticks here if latch is big enough.
But see below ...

>  }
>  EXPORT_SYMBOL_GPL(clockevent_delta2ns);
>  
> @@ -380,8 +414,8 @@ void clockevents_config(struct clock_eve
>  		sec = 600;
>  
>  	clockevents_calc_mult_shift(dev, freq, sec);
> -	dev->min_delta_ns = clockevent_delta2ns(dev->min_delta_ticks, dev);
> -	dev->max_delta_ns = clockevent_delta2ns(dev->max_delta_ticks, dev);
> +	dev->min_delta_ns = cev_delta2ns(dev->min_delta_ticks, dev, false);
> +	dev->max_delta_ns = cev_delta2ns(dev->max_delta_ticks, dev, true);
Another improvement that came to my mind just now. For min_delta_ns you
want to assert that it results in a value >= min_delta_ticks when
converted back. For max_delta_ns you want ... value <= max_delta_ticks.
What about the values in between? They for sure should land in
[min_delta_ticks ... max_delta_ticks] when converted back and ideally
should be most exact. The latter part would mean to add (rnd / 2)
instead of rnd. I don't know yet how that would behave at the borders of
the [min_delta_ns ... max_delta_ns] interval, but I think you still need
to special-case that.

Best regards
Uwe
Thomas Gleixner Sept. 20, 2013, 9:56 a.m. UTC | #2
On Thu, 19 Sep 2013, Uwe Kleine-König wrote:
> > +     u64 rnd = (u64) evt->mult - 1;
> >  
> >       if (unlikely(!evt->mult)) {
> >               evt->mult = 1;
> >               WARN_ON(1);
> >       }
> I suggest moving the assignment to rnd below this if block, as it
> changes mult.

True.

> >  
> > +     /*
> > +      * Upper bound sanity check. If the backwards conversion is
> > +      * not equal latch, we know that the above shift overflowed.
> > +      */
> > +     if (clc >> evt->shift) != (u64)latch)
> You didn't compile test, did you? Also the cast on the rhs isn't needed.

I did. I just forgot to refresh the patch before sending it :)

> > +	 * For mult <= (1 << shift) we can safely add mult - 1 to
> > +	 * prevent integer rounding loss. So the backwards conversion
> It doesn't prevent inexactness to add mult - 1. It (only) asserts that
> the ns2delta(delta2ns(latch)) >= latch instead of ... <= latch when not
> doing it.

For mult <= 1 << shift the conversion always ends up with the same
latch value.
 
> > +	 * from nsec to device ticks will be correct.
> > +	 *
> > +	 * For mult > (1 << shift), i.e. device frequency is > 1GHz we
> > +	 * need to be careful. Adding mult - 1 will result in a value
> > +	 * which when converted back to device ticks will be larger
> s/will/can/

No, it will always be larger.

> > +	 * than latch by (mult / (1 << shift)) - 1. For the min_delta
> s/by/by up to/
> 
> > +	 * calculation we still want to apply this in order to stay
> > +	 * above the minimum device ticks limit. For the upper limit
> > +	 * we would end up with a latch value larger than the upper
> > +	 * limit of the device, so we omit the add to stay below the
> > +	 * device upper boundary.
> > +	 *
> > +	 * Also omit the add if it would overflow the u64 boundary.
> > +	 */
> > +	if ((~0ULL - clc > rnd) &&
> > +	    (!ismax || evt->mult <= (1U << evt->shift)))
> > +		clc += rnd;
> I would expect that
> 
> 	if (!ismax)
> 		if (~0ULL - clc > rnd)
> 			clc += rnd;
> 		else
> 			clc = ~0ULL;
> 
> is enough (and a tad more exact in the presence of an overflow). I have
> to think about that though.

Errm.

1) We cannot add if we'd overflow

2) For mult <= 1 << shift it's always correct

3) for mult > 1 << shift we only apply it to the min value not the max
 
> >  	clockevents_calc_mult_shift(dev, freq, sec);
> > -	dev->min_delta_ns = clockevent_delta2ns(dev->min_delta_ticks, dev);
> > -	dev->max_delta_ns = clockevent_delta2ns(dev->max_delta_ticks, dev);
> > +	dev->min_delta_ns = cev_delta2ns(dev->min_delta_ticks, dev, false);
> > +	dev->max_delta_ns = cev_delta2ns(dev->max_delta_ticks, dev, true);
> Another improvement that came to my mind just now. For min_delta_ns you
> want to assert that it results in a value >= min_delta_ticks when
> converted back. For max_delta_ns you want ... value <= max_delta_ticks.
> What about the values in between? They for sure should land in
> [min_delta_ticks ... max_delta_ticks] when converted back and ideally
> should be most exact. The latter part would mean to add (rnd / 2)
> instead of rnd. I don't know yet how that would behave at the borders of
> the [min_delta_ns ... max_delta_ns] interval, but I think you still need
> to special-case that.

Again:

1) For mult <= 1 << shift the backwards conversion is always the same as
   the input value.

2) For mult > 1 << shift the backwards conversion of the min value is
   always > than the input value. And the backwards conversion of the
   max value is always < than the input value.

The values between that are completely uninteresting as the
program_events code always converts from nsec to device ticks.

We clamp the delta between min_ns and max_ns. So due to the above any

   min_ns <= delta <= max_ns

will after conversion fulfil 

   min_tick <= delta_tick <= max_tick

So what are you going to improve? Either the math works or it does not.

Thanks,

	tglx
Uwe Kleine-König Sept. 20, 2013, 8:41 p.m. UTC | #3
Hi Thomas,

On Fri, Sep 20, 2013 at 11:56:27AM +0200, Thomas Gleixner wrote:
> On Thu, 19 Sep 2013, Uwe Kleine-König wrote:
> > > +	 * For mult <= (1 << shift) we can safely add mult - 1 to
> > > +	 * prevent integer rounding loss. So the backwards conversion
> > It doesn't prevent inexactness to add mult - 1. It (only) asserts that
> > the ns2delta(delta2ns(latch)) >= latch instead of ... <= latch when not
> > doing it.
> 
> For mult <= 1 << shift the conversion always ends up with the same
> latch value.
Ah right, I missed that we're in the slow-clock-case.

> > > +	 * from nsec to device ticks will be correct.
> > > +	 *
> > > +	 * For mult > (1 << shift), i.e. device frequency is > 1GHz we
> > > +	 * need to be careful. Adding mult - 1 will result in a value
> > > +	 * which when converted back to device ticks will be larger
> > s/will/can/
> 
> No, it will always be larger.
Hmm, consider a 1.25 GHz clock with shift = 2 and mult = 5. Then
ns2clc(clc2ns(1000)) = 1000. So it's not always larger!
In the fast-clock-case we have:
With x << shift = n * mult - k for k in [0 .. mult-1] and an integer n:

	  ns2clc(clc2ns(x))
	= ns2clc(((x << shift) + mult - 1) / mult)
	= ((((x << shift) + mult - 1) / mult) * mult) >> shift
	= n * mult >> shift
	= ((x << shift) + k) >> shift
	= x + (k >> shift)

So ns2clc(clc2ns(x)) = x for all x > 0 that have

	k = mult - ((x << shift) - 1) % mult - 1 < 1 << shift

So my correction still stands.
 
> > > +	 * than latch by (mult / (1 << shift)) - 1. For the min_delta
> > s/by/by up to/
From the calculation above you can also see that this term is wrong. k
is smaller than mult (and there are values that realize k = mult - 1).
So the converted back value can be larger than latch by up to
(mult - 1) >> shift. This is zero for the slow-clock-case.

In the 1.25 GHz example above that means that the difference is up to 1,
not 0 as your term would imply. 1004 is an example where the conversion
to nanoseconds and back to ticks results in a difference of 1.
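
A quick standalone check of those numbers (the helper names are made up,
mirroring the notation above):

#include <stdio.h>
#include <stdint.h>

/* clc2ns with the mult - 1 rounding add, ns2clc as program_event does it */
static uint64_t clc2ns(uint64_t x, uint64_t mult, unsigned int shift)
{
        return ((x << shift) + mult - 1) / mult;
}

static uint64_t ns2clc(uint64_t ns, uint64_t mult, unsigned int shift)
{
        return (ns * mult) >> shift;
}

int main(void)
{
        /* 1.25 GHz example: shift = 2, mult = 5 */
        printf("%llu\n", (unsigned long long)ns2clc(clc2ns(1000, 5, 2), 5, 2)); /* 1000 */
        printf("%llu\n", (unsigned long long)ns2clc(clc2ns(1004, 5, 2), 5, 2)); /* 1005 */
        return 0;
}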

> > > +	 * calculation we still want to apply this in order to stay
> > > +	 * above the minimum device ticks limit. For the upper limit
> > > +	 * we would end up with a latch value larger than the upper
> > > +	 * limit of the device, so we omit the add to stay below the
> > > +	 * device upper boundary.
> > > +	 *
> > > +	 * Also omit the add if it would overflow the u64 boundary.
> > > +	 */
> > > +	if ((~0ULL - clc > rnd) &&
> > > +	    (!ismax || evt->mult <= (1U << evt->shift)))
> > > +		clc += rnd;
> > I would expect that
> > 
> > 	if (!ismax)
> > 		if (~0ULL - clc > rnd)
> > 			clc += rnd;
> > 		else
> > 			clc = ~0ULL;
> > 
> > is enough (and a tad more exact in the presence of an overflow). I have
> > to think about that though.
> 
> Errm.
> 
> 1) We cannot add if we'd overflow
> 
> 2) For mult <= 1 << shift it's always correct
> 
> 3) for mult > 1 << shift we only apply it to the min value not the max
Yeah, I didn't say your code is wrong *here*. I just think that my
easier (and so probably faster) code is good enough.
 
> > >  	clockevents_calc_mult_shift(dev, freq, sec);
> > > -	dev->min_delta_ns = clockevent_delta2ns(dev->min_delta_ticks, dev);
> > > -	dev->max_delta_ns = clockevent_delta2ns(dev->max_delta_ticks, dev);
> > > +	dev->min_delta_ns = cev_delta2ns(dev->min_delta_ticks, dev, false);
> > > +	dev->max_delta_ns = cev_delta2ns(dev->max_delta_ticks, dev, true);
> > Another improvement that came to my mind just now. For min_delta_ns you
> > want to assert that it results in a value >= min_delta_ticks when
> > converted back. For max_delta_ns you want ... value <= max_delta_ticks.
> > What about the values in between? They for sure should land in
> > [min_delta_ticks ... max_delta_ticks] when converted back and ideally
> > should be most exact. The latter part would mean to add (rnd / 2)
> > instead of rnd. I don't know yet how that would behave at the borders of
> > the [min_delta_ns ... max_delta_ns] interval, but I think you still need
> > to special-case that.
> 
> Again:
> 
> 1) For mult <= 1 << shift the backwards conversion is always the same as
>    the input value.
> 
> 2) For mult > 1 << shift the backwards conversion of the min value is
>    always > than the input value. And the backwards conversion of the
>    max value is always < than the input value.
> 
> The values between that are completely uninteresting as the
> program_events code always converts from nsec to device ticks.
> 
> We clamp the delta between min_ns and max_ns. So due to the above any
> 
>    min_ns <= delta <= max_ns
> 
> will after conversion fulfil 
> 
>    min_tick <= delta_tick <= max_tick
> 
> So what are you going to improve? Either the math works or it does not.
Right, my idea is nice, but useless.

So I suggest you resend your patch with the compile fix and the
corrected comment, and I will think about my suggestion to simplify the
if condition independently, as it's only a small runtime improvement and
so not important enough to hold up the correctness fix your patch
provides.

Best regards and thanks for the nice discussion,
Uwe
Thomas Gleixner Sept. 20, 2013, 9:30 p.m. UTC | #4
On Fri, 20 Sep 2013, Uwe Kleine-König wrote:
> On Fri, Sep 20, 2013 at 11:56:27AM +0200, Thomas Gleixner wrote:
> > On Thu, 19 Sep 2013, Uwe Kleine-König wrote:
> > > > +	 * from nsec to device ticks will be correct.
> > > > +	 *
> > > > +	 * For mult > (1 << shift), i.e. device frequency is > 1GHz we
> > > > +	 * need to be careful. Adding mult - 1 will result in a value
> > > > +	 * which when converted back to device ticks will be larger
> > > s/will/can/
> > 
> > No, it will always be larger.
> Hmm, consider a 1.25 GHz clock with shift = 2 and mult = 5. Then
> ns2clc(clc2ns(1000)) = 1000. So it's not always larger!
> In the fast-clock-case we have:
> With x << shift = n * mult - k for k in [0 .. mult-1] and an integer n:
> 
> 	  ns2clc(clc2ns(x))
> 	= ns2clc(((x << shift) + mult - 1) / mult)
> 	= ((((x << shift) + mult - 1) / mult) * mult) >> shift
> 	= n * mult >> shift
> 	= ((x << shift) + k) >> shift
> 	= x + (k >> shift)
> 
> So ns2clc(clc2ns(x)) = x for all x > 0 that have
> 
> 	k = mult - ((x << shift) - 1) % mult - 1 < 1 << shift
> 
> So my correction still stands.

Fair enough.  

> > 1) We cannot add if we'd overflow
> > 
> > 2) For mult <= 1 << shift it's always correct
> > 
> > 3) for mult > 1 << shift we only apply it to the min value not the max
> 
> Yeah, I didn't say your code is wrong *here*. I just think that my
> easier (and so probably faster) code is good enough.

Granted. I was stuck in the correctness discussion. So yeah, it does
not matter if we steal 30 usec of maximum idle sleep time from a 32kHz
clock. OTOH it does not matter much in the setup slow path to take
another conditional. :)

> Best regards and thanks for the nice discussion,
  
Ditto! You saved me from actually sitting down and using the pencil to
do the proper math.

Thanks,

	tglx

Patch

Index: linux-2.6/kernel/time/clockevents.c
===================================================================
--- linux-2.6.orig/kernel/time/clockevents.c
+++ linux-2.6/kernel/time/clockevents.c
@@ -33,29 +33,63 @@  struct ce_unbind {
 	int res;
 };
 
-/**
- * clockevents_delta2ns - Convert a latch value (device ticks) to nanoseconds
- * @latch:	value to convert
- * @evt:	pointer to clock event device descriptor
- *
- * Math helper, returns latch value converted to nanoseconds (bound checked)
- */
-u64 clockevent_delta2ns(unsigned long latch, struct clock_event_device *evt)
+static u64 cev_delta2ns(unsigned long latch, struct clock_event_device *evt,
+			bool ismax)
 {
 	u64 clc = (u64) latch << evt->shift;
+	u64 rnd = (u64) evt->mult - 1;
 
 	if (unlikely(!evt->mult)) {
 		evt->mult = 1;
 		WARN_ON(1);
 	}
 
+	/*
+	 * Upper bound sanity check. If the backwards conversion is
+	 * not equal latch, we know that the above shift overflowed.
+	 */
+	if (clc >> evt->shift) != (u64)latch)
+		clc = ~0ULL;
+
+	/*
+	 * Scaled math oddities:
+	 *
+	 * For mult <= (1 << shift) we can safely add mult - 1 to
+	 * prevent integer rounding loss. So the backwards conversion
+	 * from nsec to device ticks will be correct.
+	 *
+	 * For mult > (1 << shift), i.e. device frequency is > 1GHz we
+	 * need to be careful. Adding mult - 1 will result in a value
+	 * which when converted back to device ticks will be larger
+	 * than latch by (mult / (1 << shift)) - 1. For the min_delta
+	 * calculation we still want to apply this in order to stay
+	 * above the minimum device ticks limit. For the upper limit
+	 * we would end up with a latch value larger than the upper
+	 * limit of the device, so we omit the add to stay below the
+	 * device upper boundary.
+	 *
+	 * Also omit the add if it would overflow the u64 boundary.
+	 */
+	if ((~0ULL - clc > rnd) &&
+	    (!ismax || evt->mult <= (1U << evt->shift)))
+		clc += rnd;
+
 	do_div(clc, evt->mult);
-	if (clc < 1000)
-		clc = 1000;
-	if (clc > KTIME_MAX)
-		clc = KTIME_MAX;
 
-	return clc;
+	/* Deltas less than 1usec are pointless noise */
+	return clc > 1000 ? clc : 1000;
+}
+
+/**
+ * clockevents_delta2ns - Convert a latch value (device ticks) to nanoseconds
+ * @latch:	value to convert
+ * @evt:	pointer to clock event device descriptor
+ *
+ * Math helper, returns latch value converted to nanoseconds (bound checked)
+ */
+u64 clockevent_delta2ns(unsigned long latch, struct clock_event_device *evt)
+{
+	return cev_delta2ns(latch, evt, false);
 }
 EXPORT_SYMBOL_GPL(clockevent_delta2ns);
 
@@ -380,8 +414,8 @@  void clockevents_config(struct clock_eve
 		sec = 600;
 
 	clockevents_calc_mult_shift(dev, freq, sec);
-	dev->min_delta_ns = clockevent_delta2ns(dev->min_delta_ticks, dev);
-	dev->max_delta_ns = clockevent_delta2ns(dev->max_delta_ticks, dev);
+	dev->min_delta_ns = cev_delta2ns(dev->min_delta_ticks, dev, false);
+	dev->max_delta_ns = cev_delta2ns(dev->max_delta_ticks, dev, true);
 }
 
 /**