From patchwork Wed May 11 00:41:32 2011
X-Patchwork-Submitter: Nishanth Menon
X-Patchwork-Id: 775632
From: "Menon, Nishanth"
Date: Tue, 10 May 2011 19:41:32 -0500
Subject: Re: [PATCH v2 2/2] OMAP2PLUS: cpufreq: Add SMP support to cater OMAP4430
To: Santosh Shilimkar
Cc: linux-omap@vger.kernel.org, khilman@ti.com, Vishwanath BS,
 "Herman, Saquib", "Revanna, Amarnath"
In-Reply-To: <1300102729-17276-3-git-send-email-santosh.shilimkar@ti.com>
References: <1300102729-17276-1-git-send-email-santosh.shilimkar@ti.com>
 <1300102729-17276-3-git-send-email-santosh.shilimkar@ti.com>
X-Mailing-List: linux-omap@vger.kernel.org

On Mon, Mar 14, 2011 at 06:38, Santosh Shilimkar wrote:
> On OMAP SMP configuartion, both processors share the voltage
> and clock. So both CPUs needs to be scaled together and hence
> needs software co-ordination.
>
> Signed-off-by: Santosh Shilimkar
> Cc: Kevin Hilman
> cc: Vishwanath BS
> ---
>  arch/arm/mach-omap2/omap2plus-cpufreq.c |   73 ++++++++++++++++++++++++++-----
>  1 files changed, 62 insertions(+), 11 deletions(-)
>
> diff --git a/arch/arm/mach-omap2/omap2plus-cpufreq.c b/arch/arm/mach-omap2/omap2plus-cpufreq.c

[...]

>        rate = clk_get_rate(mpu_clk) / 1000;
> @@ -74,9 +76,13 @@ static int omap_target(struct cpufreq_policy *policy,

[...]

> -       cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
> +#ifdef CONFIG_SMP
> +       /*
> +        * Note that loops_per_jiffy is not updated on SMP systems in
> +        * cpufreq driver. So, update the per-CPU loops_per_jiffy value
> +        * on frequency transition. We need to update all dependent CPUs.
> +        */
> +       for_each_cpu(i, policy->cpus)
> +               per_cpu(cpu_data, i).loops_per_jiffy =
> +                       cpufreq_scale(per_cpu(cpu_data, i).loops_per_jiffy,
> +                                       freqs.old, freqs.new);

We have an issue here: arch/arm/lib/delay.S uses the generic (global)
loops_per_jiffy, which is not updated when SMP (OMAP4) is active. As a
result, loops_per_jiffy keeps whatever value it was calibrated to at
boot, while only the per-CPU copies get rescaled.
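To put rough numbers on the failure mode, here is a minimal user-space
sketch (illustration only, not kernel code; it assumes the 1008 MHz boot
calibration value of 7643136 seen in the trace below, and it approximates
cpufreq_scale() with plain lpj * new / old arithmetic):

#include <stdio.h>

/* approximation of cpufreq_scale(): rescale lpj from old_khz to new_khz */
static unsigned long scale_lpj(unsigned long lpj, unsigned int old_khz,
                               unsigned int new_khz)
{
        return (unsigned long)((unsigned long long)lpj * new_khz / old_khz);
}

int main(void)
{
        unsigned long boot_lpj = 7643136;  /* global loops_per_jiffy, calibrated at 1008 MHz */
        unsigned int boot_khz = 1008000;
        unsigned int freqs_khz[] = { 600000, 800000, 1008000 };

        for (int i = 0; i < 3; i++) {
                unsigned long correct_lpj = scale_lpj(boot_lpj, boot_khz, freqs_khz[i]);

                /* delay.S keeps using boot_lpj, so udelay()/mdelay() are off by this factor */
                printf("%7u kHz: correct lpj ~%lu, stale global lpj %lu, delay error factor %.2f\n",
                       freqs_khz[i], correct_lpj, boot_lpj,
                       (double)boot_lpj / correct_lpj);
        }
        return 0;
}

Run the other way (boot at OPP100, then scale up to 1 GHz) the factor
drops below 1, i.e. the delays become too short, which is the dangerous
case.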
With a trace added as follows:

diff --git a/arch/arm/mach-omap2/omap2plus-cpufreq.c b/arch/arm/mach-omap2/omap2plus-cpufreq.c
index 0105c8d..8bad854 100644
--- a/arch/arm/mach-omap2/omap2plus-cpufreq.c
+++ b/arch/arm/mach-omap2/omap2plus-cpufreq.c
@@ -137,10 +137,14 @@ set_freq:
 	 * cpufreq driver. So, update the per-CPU loops_per_jiffy value
 	 * on frequency transition. We need to update all dependent CPUs.
 	 */
-	for_each_cpu(i, policy->cpus)
+	for_each_cpu(i, policy->cpus) {
 		per_cpu(cpu_data, i).loops_per_jiffy =
 			cpufreq_scale(per_cpu(cpu_data, i).loops_per_jiffy,
 					freqs.old, freqs.new);
+		pr_err("%s: loops_per_jiffy=%lu cpu%d.loops_per_jiffy=%d\n",
+			__func__, loops_per_jiffy, i,
+			per_cpu(cpu_data, i).loops_per_jiffy);
+	}
 #endif

the output is:

Testing: 600000 freq
[   30.319885] omap_target: loops_per_jiffy=7643136 cpu0.loops_per_jiffy=4666514
[   30.327758] omap_target: loops_per_jiffy=7643136 cpu1.loops_per_jiffy=4549484
Testing: 800000
[   31.419616] omap_target: loops_per_jiffy=7643136 cpu0.loops_per_jiffy=6222018
[   31.427612] omap_target: loops_per_jiffy=7643136 cpu1.loops_per_jiffy=6065978
Testing: 1008000
[   32.532012] omap_target: loops_per_jiffy=7643136 cpu0.loops_per_jiffy=7839742
[   32.540252] omap_target: loops_per_jiffy=7643136 cpu1.loops_per_jiffy=7643132

Luckily my bootloader was booting up at 1 GHz, but for folks who boot at
OPP100 and then run at 1 GHz, the mdelays and udelays are going to be
badly wrong.

With a quick patch as follows (by Amarnath/Saquib), the output is:

Testing: 600000
[   27.499603] omap_target: loops_per_jiffy=4666514 cpu0.loops_per_jiffy=4666514
[   27.507507] omap_target: loops_per_jiffy=4666514 cpu1.loops_per_jiffy=4549484
Testing: 800000
[   28.617553] omap_target: loops_per_jiffy=6222018 cpu0.loops_per_jiffy=6222018
[   28.625518] omap_target: loops_per_jiffy=6222018 cpu1.loops_per_jiffy=6065978
Testing: 1008000
[   29.724578] omap_target: loops_per_jiffy=7839742 cpu0.loops_per_jiffy=7839742
[   29.732818] omap_target: loops_per_jiffy=7839742 cpu1.loops_per_jiffy=7643132

patch:

diff --git a/arch/arm/mach-omap2/omap2plus-cpufreq.c b/arch/arm/mach-omap2/omap2plus-cpufreq.c
index 0105c8d..58a968d 100644
--- a/arch/arm/mach-omap2/omap2plus-cpufreq.c
+++ b/arch/arm/mach-omap2/omap2plus-cpufreq.c
@@ -80,6 +80,7 @@ static int omap_target(struct cpufreq_policy *policy,
 	int i, ret = 0;
 	struct cpufreq_freqs freqs;
 	struct device *mpu_dev = omap2_get_mpuss_device();
+	unsigned int jiffy_loop_cpu = 0;

 	/* Changes not allowed until all CPUs are online */
 	if (is_smp() && (num_online_cpus() < NR_CPUS))
@@ -137,10 +138,14 @@ set_freq:
 	 * cpufreq driver. So, update the per-CPU loops_per_jiffy value
 	 * on frequency transition. We need to update all dependent CPUs.
 	 */
-	for_each_cpu(i, policy->cpus)
+	for_each_cpu(i, policy->cpus) {
 		per_cpu(cpu_data, i).loops_per_jiffy =
 			cpufreq_scale(per_cpu(cpu_data, i).loops_per_jiffy,
 					freqs.old, freqs.new);
+		if (per_cpu(cpu_data, i).loops_per_jiffy > jiffy_loop_cpu)
+			jiffy_loop_cpu = per_cpu(cpu_data, i).loops_per_jiffy;
+	}
+	loops_per_jiffy = jiffy_loop_cpu;
 #endif

 	/* notifiers */

Question: what would be the best solution for this? Is a solution
isolated to OMAP good enough?

Regards,
Nishanth Menon
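For completeness, a user-space sketch (illustration only, not the kernel
code) of the max-of-per-CPU propagation the quick patch performs, using
the per-CPU values from the 600000 kHz run above:

#include <stdio.h>

int main(void)
{
        /* per-CPU loops_per_jiffy after cpufreq_scale(), from the 600000 kHz run */
        unsigned long per_cpu_lpj[] = { 4666514, 4549484 };    /* cpu0, cpu1 */
        unsigned long global_lpj = 0;

        /* mirror the for_each_cpu() accumulation: publish the largest per-CPU value */
        for (int i = 0; i < 2; i++)
                if (per_cpu_lpj[i] > global_lpj)
                        global_lpj = per_cpu_lpj[i];

        /* taking the max is the conservative choice: delays built on the
         * global value can only come out longer, never shorter, than needed */
        printf("loops_per_jiffy = %lu\n", global_lpj);          /* prints 4666514 */
        return 0;
}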