
[v2,2/2] OMAP2PLUS: cpufreq: Add SMP support to cater OMAP4430

Message ID BANLkTinSPpmepfJNC=2SSPF5KzsHcKXGEw@mail.gmail.com (mailing list archive)
State New, archived

Commit Message

Nishanth Menon May 11, 2011, 12:41 a.m. UTC
On Mon, Mar 14, 2011 at 06:38, Santosh Shilimkar
<santosh.shilimkar@ti.com> wrote:
> In an OMAP SMP configuration, both processors share the voltage
> and clock, so both CPUs need to be scaled together and hence
> need software co-ordination.
>
> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
> Cc: Kevin Hilman <khilman@ti.com>
> cc: Vishwanath BS <vishwanath.bs@ti.com>
> ---
>  arch/arm/mach-omap2/omap2plus-cpufreq.c |   73 ++++++++++++++++++++++++++-----
>  1 files changed, 62 insertions(+), 11 deletions(-)
>
> diff --git a/arch/arm/mach-omap2/omap2plus-cpufreq.c b/arch/arm/mach-omap2/omap2plus-cpufreq.c

[...]
>        rate = clk_get_rate(mpu_clk) / 1000;
> @@ -74,9 +76,13 @@ static int omap_target(struct cpufreq_policy *policy,
[...]

> -       cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
> +#ifdef CONFIG_SMP
> +       /*
> +        * Note that loops_per_jiffy is not updated on SMP systems in
> +        * cpufreq driver. So, update the per-CPU loops_per_jiffy value
> +        * on frequency transition. We need to update all dependent CPUs.
> +        */
> +       for_each_cpu(i, policy->cpus)
> +               per_cpu(cpu_data, i).loops_per_jiffy =
> +                       cpufreq_scale(per_cpu(cpu_data, i).loops_per_jiffy,
> +                                       freqs.old, freqs.new);
We have an issue here - arch/arm/lib/delay.S uses the global
loops_per_jiffy, which is not updated when SMP (OMAP4) is active; as a
result loops_per_jiffy still holds the stale boot-time calibration.
This shows up with a trace added as follows (the patch and resulting
logs are shown further below):

Question: what would be the best solution for this? Is a solution
isolated to OMAP good enough?
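
For illustration only, here is a minimal C model of an lpj-based delay
(the real arch/arm/lib/delay.S is assembly and scales the value
differently); the point is that the loop count comes from the global
loops_per_jiffy while the loop body runs at the current CPU clock, so
the two must be rescaled together on every frequency change:

/*
 * Simplified sketch, not the actual kernel implementation: the number
 * of busy-wait iterations is derived from the global loops_per_jiffy,
 * so a stale global value makes the delay too short or too long once
 * the CPU has been scaled to a different frequency.
 */
static void lpj_udelay_model(unsigned long usecs)
{
	unsigned long loops = (loops_per_jiffy / (1000000 / HZ)) * usecs;

	while (loops--)
		cpu_relax();
}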

Regards,
Nishanth Menon

Comments

Santosh Shilimkar May 11, 2011, 7:10 a.m. UTC | #1
On 5/11/2011 6:11 AM, Menon, Nishanth wrote:
> On Mon, Mar 14, 2011 at 06:38, Santosh Shilimkar
> <santosh.shilimkar@ti.com>  wrote:
>> In an OMAP SMP configuration, both processors share the voltage
>> and clock, so both CPUs need to be scaled together and hence
>> need software co-ordination.
>>
>> Signed-off-by: Santosh Shilimkar<santosh.shilimkar@ti.com>
>> Cc: Kevin Hilman<khilman@ti.com>
>> cc: Vishwanath BS<vishwanath.bs@ti.com>
>> ---
>>   arch/arm/mach-omap2/omap2plus-cpufreq.c |   73 ++++++++++++++++++++++++++-----
>>   1 files changed, 62 insertions(+), 11 deletions(-)
>>
>> diff --git a/arch/arm/mach-omap2/omap2plus-cpufreq.c b/arch/arm/mach-omap2/omap2plus-cpufreq.c
>
> [...]
>>         rate = clk_get_rate(mpu_clk) / 1000;
>> @@ -74,9 +76,13 @@ static int omap_target(struct cpufreq_policy *policy,
> [...]
>
>> -       cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
>> +#ifdef CONFIG_SMP
>> +       /*
>> +        * Note that loops_per_jiffy is not updated on SMP systems in
>> +        * cpufreq driver. So, update the per-CPU loops_per_jiffy value
>> +        * on frequency transition. We need to update all dependent CPUs.
>> +        */
>> +       for_each_cpu(i, policy->cpus)
>> +               per_cpu(cpu_data, i).loops_per_jiffy =
>> +                       cpufreq_scale(per_cpu(cpu_data, i).loops_per_jiffy,
>> +                                       freqs.old, freqs.new);
> We have an issue here - arch/arm/lib/delay.S uses the global
> loops_per_jiffy, which is not updated when SMP (OMAP4) is active; as a
> result loops_per_jiffy still holds the stale boot-time calibration.
> This shows up with a trace added as follows (the patch and resulting
> logs are shown further below):

[...]

>
> Question: what would be the best solution for this? Is a solution
> isolated to OMAP good enough?
>
We have debated the global lpj update topic enough. The assumption
that all the CPUs in an ARM SMP system run at the same speed is not
valid in general.

I proposed this idea based on the fact that on OMAP we scale
all the CPUs in the SMP cluster together, but that does not seem
to be true for all ARM SMP architectures.

So there is a patch series which makes udelay() independent
of lpj and uses a timer instead.

Here is the link for the same.
http://eeek.borgchat.net/lists/arm-kernel/msg120702.html
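
The rough idea behind a timer-backed udelay(), sketched here with
hypothetical helpers (read_freerunning_counter() and counter_rate_hz()
are placeholders, not the interfaces that series actually adds), is to
measure the wait against a fixed-rate hardware counter so the delay no
longer depends on the CPU clock or on lpj at all:

/*
 * Sketch only: spin until a free-running, fixed-rate counter has
 * advanced by the requested number of microseconds. Because the
 * counter rate does not change with cpufreq, no lpj rescaling is
 * needed on frequency transitions.
 */
static void timer_udelay_sketch(unsigned long usecs)
{
	unsigned long start = read_freerunning_counter();            /* hypothetical helper */
	unsigned long ticks = usecs * (counter_rate_hz() / 1000000); /* hypothetical helper */

	while ((read_freerunning_counter() - start) < ticks)
		cpu_relax();
}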

Regards
Santosh

Patch

diff --git a/arch/arm/mach-omap2/omap2plus-cpufreq.c b/arch/arm/mach-omap2/omap2plus-cpufreq.c
index 0105c8d..8bad854 100644
--- a/arch/arm/mach-omap2/omap2plus-cpufreq.c
+++ b/arch/arm/mach-omap2/omap2plus-cpufreq.c
@@ -137,10 +137,14 @@  set_freq:
         * cpufreq driver. So, update the per-CPU loops_per_jiffy value
         * on frequency transition. We need to update all dependent CPUs.
         */
-       for_each_cpu(i, policy->cpus)
+       for_each_cpu(i, policy->cpus) {
                per_cpu(cpu_data, i).loops_per_jiffy =
                        cpufreq_scale(per_cpu(cpu_data, i).loops_per_jiffy,
                                        freqs.old, freqs.new);
+               pr_err("%s: loops_per_jiffy=%lu cpu%d.loops_per_jiffy=%lu\n",
+                               __func__, loops_per_jiffy, i,
+                               per_cpu(cpu_data, i).loops_per_jiffy);
+       }
 #endif

Testing 600000 kHz:
[   30.319885] omap_target: loops_per_jiffy=7643136 cpu0.loops_per_jiffy=4666514
[   30.327758] omap_target: loops_per_jiffy=7643136 cpu1.loops_per_jiffy=4549484
Testing 800000 kHz:
[   31.419616] omap_target: loops_per_jiffy=7643136 cpu0.loops_per_jiffy=6222018
[   31.427612] omap_target: loops_per_jiffy=7643136 cpu1.loops_per_jiffy=6065978
Testing 1008000 kHz:
[   32.532012] omap_target: loops_per_jiffy=7643136 cpu0.loops_per_jiffy=7839742
[   32.540252] omap_target: loops_per_jiffy=7643136 cpu1.loops_per_jiffy=7643132

Luckily my bootloader was booting at 1GHz, but for folks booting at
OPP100, the mdelays and udelays when running at 1GHz are going to be
badly wrong.
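
As a rough worked example using the numbers from the trace above: a CPU
calibrated at 600000 kHz gets loops_per_jiffy ~= 4666514, while the
correct value at 1008000 kHz is ~= 7839742. If the global value is left
at the 600 MHz calibration, a udelay(100) executed while running at
1008 MHz spins for only about 100 * 4666514 / 7839742 ~= 60 us, i.e.
roughly 40% shorter than requested.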

With a quick patch as follows (by Amarnath/Saquib), the output is:
Testing 600000 kHz:
[   27.499603] omap_target: loops_per_jiffy=4666514 cpu0.loops_per_jiffy=4666514
[   27.507507] omap_target: loops_per_jiffy=4666514 cpu1.loops_per_jiffy=4549484
Testing 800000 kHz:
[   28.617553] omap_target: loops_per_jiffy=6222018 cpu0.loops_per_jiffy=6222018
[   28.625518] omap_target: loops_per_jiffy=6222018 cpu1.loops_per_jiffy=6065978
Testing 1008000 kHz:
[   29.724578] omap_target: loops_per_jiffy=7839742 cpu0.loops_per_jiffy=7839742
[   29.732818] omap_target: loops_per_jiffy=7839742 cpu1.loops_per_jiffy=7643132

patch:
diff --git a/arch/arm/mach-omap2/omap2plus-cpufreq.c b/arch/arm/mach-omap2/omap2plus-cpufreq.c
index 0105c8d..58a968d 100644
--- a/arch/arm/mach-omap2/omap2plus-cpufreq.c
+++ b/arch/arm/mach-omap2/omap2plus-cpufreq.c
@@ -80,6 +80,7 @@  static int omap_target(struct cpufreq_policy *policy,
        int i, ret = 0;
        struct cpufreq_freqs freqs;
        struct device *mpu_dev = omap2_get_mpuss_device();
+       unsigned int jiffy_loop_cpu = 0;

        /* Changes not allowed until all CPUs are online */
        if (is_smp() && (num_online_cpus() < NR_CPUS))
@@ -137,10 +138,14 @@  set_freq:
         * cpufreq driver. So, update the per-CPU loops_per_jiffy value
         * on frequency transition. We need to update all dependent CPUs.
         */
-       for_each_cpu(i, policy->cpus)
+       for_each_cpu(i, policy->cpus) {
                per_cpu(cpu_data, i).loops_per_jiffy =
                        cpufreq_scale(per_cpu(cpu_data, i).loops_per_jiffy,
                                        freqs.old, freqs.new);
+               if (per_cpu(cpu_data, i).loops_per_jiffy > jiffy_loop_cpu)
+                       jiffy_loop_cpu = per_cpu(cpu_data, i).loops_per_jiffy;
+       }
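+       /* pick the largest per-CPU value so udelay never undershoots */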
+       loops_per_jiffy = jiffy_loop_cpu;
 #endif

        /* notifiers */