From patchwork Mon Jul 7 14:51:38 2014
X-Patchwork-Submitter: Thomas Petazzoni
X-Patchwork-Id: 4495471
From: Thomas Petazzoni
To: Mike Turquette, Viresh Kumar, "Rafael J. Wysocki", Jason Cooper,
    Andrew Lunn, Sebastian Hesselbarth, Gregory Clement
Cc: linux-pm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    Tawfik Bayouk, Nadav Haklai, Lior Amsalem, Ezequiel Garcia,
    Thomas Petazzoni
Subject: [PATCHv2 4/8] ARM: mvebu: extend PMSU code to support dynamic frequency scaling
Date: Mon, 7 Jul 2014 16:51:38 +0200
Message-Id: <1404744702-32010-5-git-send-email-thomas.petazzoni@free-electrons.com>
X-Mailer: git-send-email 2.0.0
In-Reply-To: <1404744702-32010-1-git-send-email-thomas.petazzoni@free-electrons.com>
References: <1404744702-32010-1-git-send-email-thomas.petazzoni@free-electrons.com>

This commit adds the necessary code to the Marvell EBU PMSU driver to
support dynamic frequency scaling. In essence, this new code:

 * registers the frequency operating points supported by the CPU;

 * registers a clock notifier on the CPU clocks. The notifier function
   listens to the newly introduced APPLY_RATE_CHANGE event, and uses it
   to finalize the frequency transition by doing the part of the
   procedure that involves the PMSU;

 * registers a platform device for the cpufreq-generic driver, which
   will take care of the CPU frequency transitions (an illustrative
   sketch of that driver-side flow follows below).
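As context for reviewers, here is a minimal, illustrative sketch of the
driver-side flow that ends up exercising the notifier added by this patch.
It assumes the standard OPP and common clock APIs; the helper name
example_cpufreq_set_target and its parameters are hypothetical and are not
part of the patch itself:

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/pm_opp.h>
#include <linux/rcupdate.h>

/*
 * Hypothetical helper, for illustration only: pick the closest operating
 * point at or above target_hz and ask the clock framework to apply it.
 * The clk_set_rate() call is what leads the CPU clock driver to emit the
 * APPLY_RATE_CHANGE event handled by the notifier added in this patch.
 */
static int example_cpufreq_set_target(struct device *cpu_dev,
				      struct clk *cpu_clk,
				      unsigned long target_hz)
{
	struct dev_pm_opp *opp;
	unsigned long freq = target_hz;

	/* OPP lookups are RCU-protected in this kernel generation */
	rcu_read_lock();
	opp = dev_pm_opp_find_freq_ceil(cpu_dev, &freq);
	if (IS_ERR(opp)) {
		rcu_read_unlock();
		return PTR_ERR(opp);
	}
	rcu_read_unlock();

	return clk_set_rate(cpu_clk, freq);
}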
Signed-off-by: Thomas Petazzoni
Reviewed-by: Ezequiel Garcia
---
 arch/arm/mach-mvebu/pmsu.c | 184 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 184 insertions(+)

diff --git a/arch/arm/mach-mvebu/pmsu.c b/arch/arm/mach-mvebu/pmsu.c
index 53a55c8..9257b16 100644
--- a/arch/arm/mach-mvebu/pmsu.c
+++ b/arch/arm/mach-mvebu/pmsu.c
@@ -18,20 +18,26 @@ #define pr_fmt(fmt) "mvebu-pmsu: " fmt
+#include
 #include
+#include
 #include
 #include
 #include
+#include
 #include
 #include
+#include
 #include
 #include
+#include
 #include
 #include
 #include
 #include
 #include "common.h"
+#include "armada-370-xp.h"
 
 static void __iomem *pmsu_mp_base;
 
@@ -57,6 +63,10 @@
 #define PMSU_STATUS_AND_MASK_IRQ_MASK		BIT(24)
 #define PMSU_STATUS_AND_MASK_FIQ_MASK		BIT(25)
 
+#define PMSU_EVENT_STATUS_AND_MASK(cpu)		((cpu * 0x100) + 0x120)
+#define PMSU_EVENT_STATUS_AND_MASK_DFS_DONE	BIT(1)
+#define PMSU_EVENT_STATUS_AND_MASK_DFS_DONE_MASK	BIT(17)
+
 #define PMSU_BOOT_ADDR_REDIRECT_OFFSET(cpu) ((cpu * 0x100) + 0x124)
 
 /* PMSU fabric registers */
@@ -296,3 +306,177 @@ int __init armada_370_xp_cpu_pm_init(void)
 arch_initcall(armada_370_xp_cpu_pm_init);
 
 early_initcall(armada_370_xp_pmsu_init);
+
+static void armada_xp_cpufreq_clk_set_local(void *data)
+{
+	u32 reg;
+	u32 cpu = smp_processor_id();
+	unsigned long flags;
+
+	local_irq_save(flags);
+
+	/* Prepare to enter idle */
+	reg = readl(pmsu_mp_base + PMSU_STATUS_AND_MASK(cpu));
+	reg |= PMSU_STATUS_AND_MASK_CPU_IDLE_WAIT |
+	       PMSU_STATUS_AND_MASK_IRQ_MASK |
+	       PMSU_STATUS_AND_MASK_FIQ_MASK;
+	writel(reg, pmsu_mp_base + PMSU_STATUS_AND_MASK(cpu));
+
+	/* Request the DFS transition */
+	reg = readl(pmsu_mp_base + PMSU_CONTROL_AND_CONFIG(cpu));
+	reg |= PMSU_CONTROL_AND_CONFIG_DFS_REQ;
+	writel(reg, pmsu_mp_base + PMSU_CONTROL_AND_CONFIG(cpu));
+
+	/* The fact of entering idle will trigger the DFS transition */
+	wfi();
+
+	/*
+	 * We're back from idle, the DFS transition has completed,
+	 * clear the idle wait indication.
+	 */
+	reg = readl(pmsu_mp_base + PMSU_STATUS_AND_MASK(cpu));
+	reg &= ~PMSU_STATUS_AND_MASK_CPU_IDLE_WAIT;
+	writel(reg, pmsu_mp_base + PMSU_STATUS_AND_MASK(cpu));
+
+	local_irq_restore(flags);
+}
+
+struct armada_xp_cpufreq_notifier_block {
+	struct notifier_block nb;
+	int cpu;
+};
+
+static int armada_xp_cpufreq_clk_notify(struct notifier_block *self,
+					unsigned long action, void *data)
+{
+	struct armada_xp_cpufreq_notifier_block *nb =
+		container_of(self, struct armada_xp_cpufreq_notifier_block, nb);
+	unsigned long timeout;
+	int cpu = cpu_logical_map(nb->cpu);
+	u32 reg;
+
+	if (action != APPLY_RATE_CHANGE)
+		return 0;
+
+	/* Clear any previous DFS DONE event */
+	reg = readl(pmsu_mp_base + PMSU_EVENT_STATUS_AND_MASK(cpu));
+	reg &= ~PMSU_EVENT_STATUS_AND_MASK_DFS_DONE;
+	writel(reg, pmsu_mp_base + PMSU_EVENT_STATUS_AND_MASK(cpu));
+
+	/* Mask the DFS done interrupt, since we are going to poll */
+	reg = readl(pmsu_mp_base + PMSU_EVENT_STATUS_AND_MASK(cpu));
+	reg |= PMSU_EVENT_STATUS_AND_MASK_DFS_DONE_MASK;
+	writel(reg, pmsu_mp_base + PMSU_EVENT_STATUS_AND_MASK(cpu));
+
+	/* Trigger the DFS on the appropriate CPU */
+	smp_call_function_single(get_logical_index(cpu),
+				 armada_xp_cpufreq_clk_set_local, NULL, false);
+
+	/* Poll until the DFS done event is generated */
+	timeout = jiffies + HZ;
+	while (time_before(jiffies, timeout)) {
+		reg = readl(pmsu_mp_base + PMSU_EVENT_STATUS_AND_MASK(cpu));
+		if (reg & PMSU_EVENT_STATUS_AND_MASK_DFS_DONE)
+			break;
+		udelay(10);
+	}
+
+	/* Restore the DFS mask to its original state */
+	reg = readl(pmsu_mp_base + PMSU_EVENT_STATUS_AND_MASK(cpu));
+	reg &= ~PMSU_EVENT_STATUS_AND_MASK_DFS_DONE_MASK;
+	writel(reg, pmsu_mp_base + PMSU_EVENT_STATUS_AND_MASK(cpu));
+
+	return NOTIFY_DONE;
+}
+
+static int __init armada_xp_pmsu_cpufreq_init(void)
+{
+	struct device_node *np;
+	struct resource res;
+	int ret, cpu;
+
+	if (!of_machine_is_compatible("marvell,armadaxp"))
+		return 0;
+
+	/*
+	 * In order to have proper cpufreq handling, we need to ensure
+	 * that the Device Tree description of the CPU clock includes
+	 * the definition of the PMU DFS registers. If not, we do not
+	 * register the clock notifier and the cpufreq driver. This
+	 * piece of code is only for compatibility with old Device
+	 * Trees.
+	 */
+	np = of_find_compatible_node(NULL, NULL, "marvell,armada-xp-cpu-clock");
+	if (!np)
+		return 0;
+
+	ret = of_address_to_resource(np, 1, &res);
+	if (ret) {
+		pr_warn(FW_WARN "not enabling cpufreq, deprecated armada-xp-cpu-clock binding\n");
+		of_node_put(np);
+		return 0;
+	}
+
+	of_node_put(np);
+
+	/*
+	 * For each CPU, this loop registers the operating points
+	 * supported (which are the nominal CPU frequency and half of
+	 * it), and registers the clock notifier that will take care
+	 * of doing the PMSU part of a frequency transition.
+	 */
+	for_each_possible_cpu(cpu) {
+		struct clk *clk;
+		struct device *cpu_dev;
+		struct armada_xp_cpufreq_notifier_block *nbs;
+		int ret;
+
+		cpu_dev = get_cpu_device(cpu);
+		if (!cpu_dev) {
+			pr_err("Cannot get CPU %d\n", cpu);
+			continue;
+		}
+
+		clk = clk_get(cpu_dev, 0);
+		if (IS_ERR(clk)) {
+			pr_err("Cannot get clock for CPU %d\n", cpu);
+			return PTR_ERR(clk);
+		}
+
+		ret = dev_pm_opp_add(cpu_dev, clk_get_rate(clk), 0);
+		if (ret) {
+			clk_put(clk);
+			return ret;
+		}
+
+		ret = dev_pm_opp_add(cpu_dev, clk_get_rate(clk) / 2, 0);
+		if (ret) {
+			clk_put(clk);
+			return ret;
+		}
+
+		nbs = kzalloc(sizeof(struct armada_xp_cpufreq_notifier_block),
+			      GFP_KERNEL);
+		if (!nbs) {
+			pr_err("Cannot allocate memory for cpufreq notifier\n");
+			clk_put(clk);
+			return -ENOMEM;
+		}
+
+		nbs->nb.notifier_call = armada_xp_cpufreq_clk_notify;
+		nbs->cpu = cpu;
+
+		ret = clk_notifier_register(clk, &nbs->nb);
+		if (ret) {
+			pr_err("Cannot register clock notifier\n");
+			kfree(nbs);
+			clk_put(clk);
+			return ret;
+		}
+	}
+
+	platform_device_register_simple("cpufreq-generic", -1, NULL, 0);
+	return 0;
+}
+
+device_initcall(armada_xp_pmsu_cpufreq_init);
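
To summarize the runtime sequence implemented above: the clock notifier
runs on the CPU that initiated the rate change. It clears and masks the
DFS-done event of the target CPU, then uses smp_call_function_single() so
that the target CPU itself masks its IRQ/FIQ at the PMSU level, sets the
DFS request bit and enters WFI, which is what actually triggers the
hardware transition. The initiating CPU polls the PMSU event status
register for up to one second before restoring the DFS-done mask, and the
cpufreq-generic platform device registered at the end lets the generic
cpufreq driver pick up the operating points added for each CPU.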