From patchwork Tue Jun 24 04:11:09 2014
X-Patchwork-Id: 4406271
From: Nicolas Pitre
To: Abhilash Kesavan, Doug Anderson, Andrew Bresticker
Cc: Kevin Hilman, Olof Johansson, Lorenzo Pieralisi,
 linux-samsung-soc@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linaro-kernel@lists.linaro.org
Subject: [PATCH 1/3] ARM: MCPM: provide infrastructure to allow for MCPM loopback
Date: Tue, 24 Jun 2014 00:11:09 -0400
Message-id: <1403583071-5650-2-git-send-email-nicolas.pitre@linaro.org>
In-reply-to: <1403583071-5650-1-git-send-email-nicolas.pitre@linaro.org>
References: <1403583071-5650-1-git-send-email-nicolas.pitre@linaro.org>

The kernel already has the responsibility to handle resources such as the
CCI when hotplugging CPUs, during the booting of secondary CPUs, and when
resuming from suspend/idle. It would be more coherent and less confusing
if the CCI for the boot CPU (or cluster) were also initialized by the
kernel, rather than expecting the firmware/bootloader to do it in that one
case. After all, the kernel already has all the necessary code and the
bootloader shouldn't have to care at all.

However, the CCI may be turned on only when the cache is off. Leveraging
the CPU suspend code to loop back through the low-level MCPM entry point
is all that is needed to properly turn on the CCI from the kernel, using
the same code path as for a secondary boot.

Let's provide a generic MCPM loopback function that can be invoked by
backend initialization code to set things (CCI or similar) on the boot
CPU just as it is done for the other CPUs.
Signed-off-by: Nicolas Pitre
Tested-by: Doug Anderson
---
 arch/arm/common/mcpm_entry.c | 52 ++++++++++++++++++++++++++++++++++++++++++++
 arch/arm/include/asm/mcpm.h  | 16 ++++++++++++++
 2 files changed, 68 insertions(+)

diff --git a/arch/arm/common/mcpm_entry.c b/arch/arm/common/mcpm_entry.c
index f91136ab44..5e7284a3f8 100644
--- a/arch/arm/common/mcpm_entry.c
+++ b/arch/arm/common/mcpm_entry.c
@@ -12,11 +12,13 @@
 #include <linux/kernel.h>
 #include <linux/init.h>
 #include <linux/irqflags.h>
+#include <linux/cpu_pm.h>
 
 #include <asm/mcpm.h>
 #include <asm/cacheflush.h>
 #include <asm/idmap.h>
 #include <asm/cputype.h>
+#include <asm/suspend.h>
 
 extern unsigned long mcpm_entry_vectors[MAX_NR_CLUSTERS][MAX_CPUS_PER_CLUSTER];
 
@@ -146,6 +148,56 @@ int mcpm_cpu_powered_up(void)
 	return 0;
 }
 
+#ifdef CONFIG_ARM_CPU_SUSPEND
+
+static int __init nocache_trampoline(unsigned long _arg)
+{
+	void (*cache_disable)(void) = (void *)_arg;
+	unsigned int mpidr = read_cpuid_mpidr();
+	unsigned int cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+	unsigned int cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+	phys_reset_t phys_reset;
+
+	mcpm_set_entry_vector(cpu, cluster, cpu_resume);
+	setup_mm_for_reboot();
+
+	__mcpm_cpu_going_down(cpu, cluster);
+	BUG_ON(!__mcpm_outbound_enter_critical(cpu, cluster));
+	cache_disable();
+	__mcpm_outbound_leave_critical(cluster, CLUSTER_DOWN);
+	__mcpm_cpu_down(cpu, cluster);
+
+	phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset);
+	phys_reset(virt_to_phys(mcpm_entry_point));
+	BUG();
+}
+
+int __init mcpm_loopback(void (*cache_disable)(void))
+{
+	int ret;
+
+	/*
+	 * We're going to soft-restart the current CPU through the
+	 * low-level MCPM code by leveraging the suspend/resume
+	 * infrastructure. Let's play it safe by using cpu_pm_enter()
+	 * in case the CPU init code path resets the VFP or similar.
+	 */
+	local_irq_disable();
+	local_fiq_disable();
+	ret = cpu_pm_enter();
+	if (!ret) {
+		ret = cpu_suspend((unsigned long)cache_disable, nocache_trampoline);
+		cpu_pm_exit();
+	}
+	local_fiq_enable();
+	local_irq_enable();
+	if (ret)
+		pr_err("%s returned %d\n", __func__, ret);
+	return ret;
+}
+
+#endif
+
 struct sync_struct mcpm_sync;
 
 /*
diff --git a/arch/arm/include/asm/mcpm.h b/arch/arm/include/asm/mcpm.h
index 94060adba1..ff73affd45 100644
--- a/arch/arm/include/asm/mcpm.h
+++ b/arch/arm/include/asm/mcpm.h
@@ -217,6 +217,22 @@ int __mcpm_cluster_state(unsigned int cluster);
 int __init mcpm_sync_init(
 	void (*power_up_setup)(unsigned int affinity_level));
 
+/**
+ * mcpm_loopback - make a run through the MCPM low-level code
+ *
+ * @cache_disable: pointer to function performing cache disabling
+ *
+ * This exercises the MCPM machinery by soft resetting the CPU and branching
+ * to the MCPM low-level entry code before returning to the caller.
+ * The @cache_disable function must do the necessary cache disabling to
+ * let the regular kernel init code turn it back on as if the CPU was
+ * hotplugged in. The MCPM state machine is set as if the cluster was
+ * initialized meaning the power_up_setup callback passed to mcpm_sync_init()
+ * will be invoked for all affinity levels. This may be useful to initialize
+ * some resources such as enabling the CCI that requires the cache to be off,
+ * or simply for testing purposes.
+ */
+int __init mcpm_loopback(void (*cache_disable)(void));
+
 void __init mcpm_smp_set_ops(void);
 
 #else
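
For illustration, here is a minimal sketch of how a platform MCPM backend
might wire this up from its init routine, while only the boot CPU is
running. This example is not part of the patch: all platform_* names are
hypothetical placeholders for a backend's existing ops, while
mcpm_platform_register(), mcpm_sync_init(), mcpm_smp_set_ops() and the
v7_exit_coherency_flush() helper from asm/cacheflush.h are the stock
kernel APIs.

#include <linux/init.h>

#include <asm/cacheflush.h>
#include <asm/mcpm.h>

/* Hypothetical backend methods; a real backend fills these in. */
static const struct mcpm_platform_ops platform_power_ops = {
	/* .power_up, .power_down, etc. */
};

static void platform_power_up_setup(unsigned int affinity_level)
{
	/*
	 * Runs from the low-level MCPM entry path with MMU and caches
	 * off, so real backends implement this in assembly or very
	 * restricted C; at affinity level 1 it would typically enable
	 * the CCI port for the cluster being powered up.
	 */
}

static void platform_cache_off(void)
{
	/*
	 * Called back from nocache_trampoline() with IRQs/FIQs off:
	 * flush and disable the local cache level and exit coherency,
	 * just as the backend's power_down method would do before a
	 * CPU is shut off.
	 */
	v7_exit_coherency_flush(louis);
}

static int __init platform_mcpm_init(void)
{
	int ret;

	ret = mcpm_platform_register(&platform_power_ops);
	if (!ret)
		ret = mcpm_sync_init(platform_power_up_setup);
	if (!ret)
		/* loop back through the MCPM entry point to turn the CCI on */
		ret = mcpm_loopback(platform_cache_off);
	if (!ret)
		mcpm_smp_set_ops();
	return ret;
}
early_initcall(platform_mcpm_init);

The key point of the design is visible here: the backend does not need any
special-case code for the boot cluster. The loopback makes the boot CPU
take the same cache-off path through mcpm_entry_point that a hotplugged or
secondary CPU would, so power_up_setup gets invoked for all affinity
levels exactly once before SMP bringup.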