
[1/2] clocksource/drivers/timer-ti-dm: Prepare to handle dra7 timer wrap issue

Message ID 20210304073737.15810-2-tony@atomide.com (mailing list archive)
State New, archived
Series Fixes for dra7 timer wrap errata i940

Commit Message

Tony Lindgren March 4, 2021, 7:37 a.m. UTC
There is a timer wrap issue on dra7 for the ARM architected timer.
In a typical clock configuration the timer fails to wrap after 388 days.

To work around the issue, we need to use timer-ti-dm timers instead.

Let's prepare for adding support for percpu timers by adding a common
dmtimer_clkevt_init_common() and calling it from dmtimer_clockevent_init().
This patch makes no intentional functional changes.

Signed-off-by: Tony Lindgren <tony@atomide.com>
---
 drivers/clocksource/timer-ti-dm-systimer.c | 72 +++++++++++++++-------
 1 file changed, 49 insertions(+), 23 deletions(-)

Comments

Daniel Lezcano March 22, 2021, 3:55 p.m. UTC | #1
On 04/03/2021 08:37, Tony Lindgren wrote:
> There is a timer wrap issue on dra7 for the ARM architected timer.
> In a typical clock configuration the timer fails to wrap after 388 days.
> 
> To work around the issue, we need to use timer-ti-dm timers instead.
> 
> Let's prepare for adding support for percpu timers by adding a common
> dmtimer_clkevt_init_common() and call it from dmtimer_clockevent_init().
> This patch makes no intentional functional changes.
> 
> Signed-off-by: Tony Lindgren <tony@atomide.com>
> ---

[ ... ]

> @@ -575,33 +574,60 @@ static int __init dmtimer_clockevent_init(struct device_node *np)
>  	 */
>  	writel_relaxed(OMAP_TIMER_CTRL_POSTED, t->base + t->ifctrl);
>  
> +	if (dev->cpumask == cpu_possible_mask)
> +		irqflags = IRQF_TIMER;
> +	else
> +		irqflags = IRQF_TIMER | IRQF_NOBALANCING;

Can you explain the reasoning behind the test above?

[ ... ]
Tony Lindgren March 22, 2021, 4:33 p.m. UTC | #2
Hi,

* Daniel Lezcano <daniel.lezcano@linaro.org> [210322 15:56]:
> On 04/03/2021 08:37, Tony Lindgren wrote:
> > There is a timer wrap issue on dra7 for the ARM architected timer.
> > In a typical clock configuration the timer fails to wrap after 388 days.
> > 
> > To work around the issue, we need to use timer-ti-dm timers instead.
> > 
> > Let's prepare for adding support for percpu timers by adding a common
> > dmtimer_clkevt_init_common() and call it from dmtimer_clockevent_init().
> > This patch makes no intentional functional changes.
> > 
> > Signed-off-by: Tony Lindgren <tony@atomide.com>
> > ---
> 
> [ ... ]
> 
> > @@ -575,33 +574,60 @@ static int __init dmtimer_clockevent_init(struct device_node *np)
> >  	 */
> >  	writel_relaxed(OMAP_TIMER_CTRL_POSTED, t->base + t->ifctrl);
> >  
> > +	if (dev->cpumask == cpu_possible_mask)
> > +		irqflags = IRQF_TIMER;
> > +	else
> > +		irqflags = IRQF_TIMER | IRQF_NOBALANCING;
> 
> Can you explain the reasoning behind the test above ?

In the per cpu case we assign one dmtimer per cpu, and we want the
interrupt handling on the assigned CPU. In the per cpu case we have
the cpu specified with dev->cpumask unlike for the normal clockevent
case.

In the per cpu dmtimer case the interrupt line is not wired per cpu
though, so I don't think we want to add IRQF_PERCPU here.

Or do you have some better suggestion for the flags to use here? :)

Regards,

Tony
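
To make the distinction concrete, here is a minimal sketch of the two call
sites the common helper is meant to serve. The first call matches the patch
below; the per-CPU values (features, name, rating) in the second call are
placeholders, not taken from the posted series:

	/* Normal clockevent: a single timer serving all possible CPUs. */
	error = dmtimer_clkevt_init_common(clkevt, np,
					   CLOCK_EVT_FEAT_PERIODIC |
					   CLOCK_EVT_FEAT_ONESHOT,
					   cpu_possible_mask, "clockevent",
					   300);

	/*
	 * Per-CPU case (patch 2/2): one timer pinned to a single CPU, so
	 * dev->cpumask != cpu_possible_mask and the IRQF_NOBALANCING flag
	 * gets picked in the hunk quoted above. Values here are assumed.
	 */
	error = dmtimer_clkevt_init_common(clkevt, np,
					   CLOCK_EVT_FEAT_ONESHOT,
					   cpumask_of(cpu), "percpu timer",
					   300);
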
Daniel Lezcano March 22, 2021, 6:23 p.m. UTC | #3
On 22/03/2021 17:33, Tony Lindgren wrote:
> Hi,
> 
> * Daniel Lezcano <daniel.lezcano@linaro.org> [210322 15:56]:
>> On 04/03/2021 08:37, Tony Lindgren wrote:
>>> There is a timer wrap issue on dra7 for the ARM architected timer.
>>> In a typical clock configuration the timer fails to wrap after 388 days.
>>>
>>> To work around the issue, we need to use timer-ti-dm timers instead.
>>>
>>> Let's prepare for adding support for percpu timers by adding a common
>>> dmtimer_clkevt_init_common() and call it from dmtimer_clockevent_init().
>>> This patch makes no intentional functional changes.
>>>
>>> Signed-off-by: Tony Lindgren <tony@atomide.com>
>>> ---
>>
>> [ ... ]
>>
>>> @@ -575,33 +574,60 @@ static int __init dmtimer_clockevent_init(struct device_node *np)
>>>  	 */
>>>  	writel_relaxed(OMAP_TIMER_CTRL_POSTED, t->base + t->ifctrl);
>>>  
>>> +	if (dev->cpumask == cpu_possible_mask)
>>> +		irqflags = IRQF_TIMER;
>>> +	else
>>> +		irqflags = IRQF_TIMER | IRQF_NOBALANCING;
>>
>> Can you explain the reasoning behind the test above ?
> 
> In the per cpu case we assign one dmtimer per cpu, and we want the
> interrupt handling on the assigned CPU. In the per cpu case we have
> the cpu specified with dev->cpumask unlike for the normal clockevent
> case.
> 
> In the per cpu dmtimer case the interrupt line is not wired per cpu
> though, so I don't think we want to add IRQF_PERCPU here.

If it is per cpu, then the parameter will be cpumask_of(cpu). If there
is only one cpu in the mask, no balancing can happen, so IRQF_NOBALANCING
is not needed, nor are this test and the irqflags, right?
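
Put differently, a sketch of what dropping the test would look like (not the
posted code): with the per-CPU affinity guaranteed elsewhere, the irqflags
variable goes away and the request stays as simple as before:

	error = request_irq(dev->irq, dmtimer_clockevent_interrupt,
			    IRQF_TIMER, name, clkevt);
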
Tony Lindgren March 23, 2021, 6:31 a.m. UTC | #4
* Daniel Lezcano <daniel.lezcano@linaro.org> [210322 18:24]:
> On 22/03/2021 17:33, Tony Lindgren wrote:
> > Hi,
> > 
> > * Daniel Lezcano <daniel.lezcano@linaro.org> [210322 15:56]:
> >> On 04/03/2021 08:37, Tony Lindgren wrote:
> >>> There is a timer wrap issue on dra7 for the ARM architected timer.
> >>> In a typical clock configuration the timer fails to wrap after 388 days.
> >>>
> >>> To work around the issue, we need to use timer-ti-dm timers instead.
> >>>
> >>> Let's prepare for adding support for percpu timers by adding a common
> >>> dmtimer_clkevt_init_common() and call it from dmtimer_clockevent_init().
> >>> This patch makes no intentional functional changes.
> >>>
> >>> Signed-off-by: Tony Lindgren <tony@atomide.com>
> >>> ---
> >>
> >> [ ... ]
> >>
> >>> @@ -575,33 +574,60 @@ static int __init dmtimer_clockevent_init(struct device_node *np)
> >>>  	 */
> >>>  	writel_relaxed(OMAP_TIMER_CTRL_POSTED, t->base + t->ifctrl);
> >>>  
> >>> +	if (dev->cpumask == cpu_possible_mask)
> >>> +		irqflags = IRQF_TIMER;
> >>> +	else
> >>> +		irqflags = IRQF_TIMER | IRQF_NOBALANCING;
> >>
> >> Can you explain the reasoning behind the test above ?
> > 
> > In the per cpu case we assign one dmtimer per cpu, and we want the
> > interrupt handling on the assigned CPU. In the per cpu case we have
> > the cpu specified with dev->cpumask unlike for the normal clockevent
> > case.
> > 
> > In the per cpu dmtimer case the interrupt line is not wired per cpu
> > though, so I don't think we want to add IRQF_PERCPU here.
> 
> If it is per cpu, then the parameter will be cpumask_of(cpu). If there
> is only one cpu in the mask, no balancing can happen, so IRQF_NOBALANCING
> is not needed, nor are this test and the irqflags, right?

Oh yeah you're right, none of that is needed. For the percpu case we
already have irq_force_affinity() in omap_dmtimer_starting_cpu(). I'll
update and send out v2 of these two patches.

Thanks,

Tony
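
For reference, a rough sketch of the CPU hotplug "starting" callback mentioned
above; the real omap_dmtimer_starting_cpu() is part of patch 2/2 of this
series, and the per-CPU variable name used here is an assumption:

/* Assumed per-CPU instance; the actual name comes from patch 2/2. */
static DEFINE_PER_CPU(struct dmtimer_clockevent, dmtimer_percpu_timer);

/*
 * Sketch only: the per-CPU dmtimer interrupt gets pinned to its CPU with
 * irq_force_affinity() from the hotplug starting callback, which is why
 * IRQF_NOBALANCING at request_irq() time is unnecessary.
 */
static int omap_dmtimer_starting_cpu(unsigned int cpu)
{
	struct dmtimer_clockevent *clkevt = per_cpu_ptr(&dmtimer_percpu_timer, cpu);
	struct clock_event_device *dev = &clkevt->dev;
	struct dmtimer_systimer *t = &clkevt->t;

	clockevents_config_and_register(dev, t->rate, 3, 0xffffffff);
	irq_force_affinity(dev->irq, cpumask_of(cpu));

	return 0;
}

A callback like this is typically registered with cpuhp_setup_state() so that
it runs on each CPU as it comes online.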

Patch

diff --git a/drivers/clocksource/timer-ti-dm-systimer.c b/drivers/clocksource/timer-ti-dm-systimer.c
--- a/drivers/clocksource/timer-ti-dm-systimer.c
+++ b/drivers/clocksource/timer-ti-dm-systimer.c
@@ -528,17 +528,18 @@  static void omap_clockevent_unidle(struct clock_event_device *evt)
 	writel_relaxed(OMAP_TIMER_INT_OVERFLOW, t->base + t->wakeup);
 }
 
-static int __init dmtimer_clockevent_init(struct device_node *np)
+static int __init dmtimer_clkevt_init_common(struct dmtimer_clockevent *clkevt,
+					     struct device_node *np,
+					     unsigned int features,
+					     const struct cpumask *cpumask,
+					     const char *name,
+					     int rating)
 {
-	struct dmtimer_clockevent *clkevt;
 	struct clock_event_device *dev;
 	struct dmtimer_systimer *t;
+	unsigned long irqflags;
 	int error;
 
-	clkevt = kzalloc(sizeof(*clkevt), GFP_KERNEL);
-	if (!clkevt)
-		return -ENOMEM;
-
 	t = &clkevt->t;
 	dev = &clkevt->dev;
 
@@ -546,25 +547,23 @@  static int __init dmtimer_clockevent_init(struct device_node *np)
 	 * We mostly use cpuidle_coupled with ARM local timers for runtime,
 	 * so there's probably no use for CLOCK_EVT_FEAT_DYNIRQ here.
 	 */
-	dev->features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT;
-	dev->rating = 300;
+	dev->features = features;
+	dev->rating = rating;
 	dev->set_next_event = dmtimer_set_next_event;
 	dev->set_state_shutdown = dmtimer_clockevent_shutdown;
 	dev->set_state_periodic = dmtimer_set_periodic;
 	dev->set_state_oneshot = dmtimer_clockevent_shutdown;
 	dev->set_state_oneshot_stopped = dmtimer_clockevent_shutdown;
 	dev->tick_resume = dmtimer_clockevent_shutdown;
-	dev->cpumask = cpu_possible_mask;
+	dev->cpumask = cpumask;
 
 	dev->irq = irq_of_parse_and_map(np, 0);
-	if (!dev->irq) {
-		error = -ENXIO;
-		goto err_out_free;
-	}
+	if (!dev->irq)
+		return -ENXIO;
 
 	error = dmtimer_systimer_setup(np, &clkevt->t);
 	if (error)
-		goto err_out_free;
+		return error;
 
 	clkevt->period = 0xffffffff - DIV_ROUND_CLOSEST(t->rate, HZ);
 
@@ -575,33 +574,60 @@  static int __init dmtimer_clockevent_init(struct device_node *np)
 	 */
 	writel_relaxed(OMAP_TIMER_CTRL_POSTED, t->base + t->ifctrl);
 
+	if (dev->cpumask == cpu_possible_mask)
+		irqflags = IRQF_TIMER;
+	else
+		irqflags = IRQF_TIMER | IRQF_NOBALANCING;
+
 	error = request_irq(dev->irq, dmtimer_clockevent_interrupt,
-			    IRQF_TIMER, "clockevent", clkevt);
+			    irqflags, name, clkevt);
 	if (error)
 		goto err_out_unmap;
 
 	writel_relaxed(OMAP_TIMER_INT_OVERFLOW, t->base + t->irq_ena);
 	writel_relaxed(OMAP_TIMER_INT_OVERFLOW, t->base + t->wakeup);
 
-	pr_info("TI gptimer clockevent: %s%lu Hz at %pOF\n",
-		of_find_property(np, "ti,timer-alwon", NULL) ?
+	pr_info("TI gptimer %s: %s%lu Hz at %pOF\n",
+		name, of_find_property(np, "ti,timer-alwon", NULL) ?
 		"always-on " : "", t->rate, np->parent);
 
-	clockevents_config_and_register(dev, t->rate,
+	return 0;
+
+err_out_unmap:
+	iounmap(t->base);
+
+	return error;
+}
+
+static int __init dmtimer_clockevent_init(struct device_node *np)
+{
+	struct dmtimer_clockevent *clkevt;
+	int error;
+
+	clkevt = kzalloc(sizeof(*clkevt), GFP_KERNEL);
+	if (!clkevt)
+		return -ENOMEM;
+
+	error = dmtimer_clkevt_init_common(clkevt, np,
+					   CLOCK_EVT_FEAT_PERIODIC |
+					   CLOCK_EVT_FEAT_ONESHOT,
+					   cpu_possible_mask, "clockevent",
+					   300);
+	if (error)
+		goto err_out_free;
+
+	clockevents_config_and_register(&clkevt->dev, clkevt->t.rate,
 					3, /* Timer internal resynch latency */
 					0xffffffff);
 
 	if (of_machine_is_compatible("ti,am33xx") ||
 	    of_machine_is_compatible("ti,am43")) {
-		dev->suspend = omap_clockevent_idle;
-		dev->resume = omap_clockevent_unidle;
+		clkevt->dev.suspend = omap_clockevent_idle;
+		clkevt->dev.resume = omap_clockevent_unidle;
 	}
 
 	return 0;
 
-err_out_unmap:
-	iounmap(t->base);
-
 err_out_free:
 	kfree(clkevt);