From patchwork Thu Nov 28 10:15:46 2019
X-Patchwork-Submitter: Sudeep Holla
X-Patchwork-Id: 11265675
X-Patchwork-Delegate: viresh.linux@gmail.com
From: Sudeep Holla
To: linux-pm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org
Cc: Sudeep Holla, "Rafael J . Wysocki", Liviu Dudau, Viresh Kumar,
 Dietmar Eggemann, Lorenzo Pieralisi, Morten Rasmussen, Lukasz Luba
Subject: [PATCH 1/2] ARM: vexpress: Set-up shared OPP table instead of
 individual for each CPU
Date: Thu, 28 Nov 2019 10:15:46 +0000
Message-Id: <20191128101547.519-1-sudeep.holla@arm.com>
X-Mailer: git-send-email 2.17.1
X-Mailing-List: linux-pm@vger.kernel.org

Currently we add an individual copy of the same OPP table for each CPU
within a cluster. This is redundant and doesn't reflect reality.

We can't use the core cpumask to set policy->cpus in
ve_spc_cpufreq_init() anymore, as that function gets called via
cpuhp_cpufreq_online()->cpufreq_online()->cpufreq_driver->init() and
the cpumask gets updated upon CPU hotplug operations. It may also cause
issues when the vexpress_spc_cpufreq driver is built as a module.

Since ve_spc_clk_init() is a built-in device initcall, we should be
able to use topology_core_cpumask there to set the OPP sharing cpumask
via dev_pm_opp_set_sharing_cpus(), and retrieve the same mask later in
the driver via dev_pm_opp_get_sharing_cpus().
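As a minimal sketch of the set/get pairing described above (simplified,
with error handling and the enclosing per-CPU loop elided; the
example_* helper names are illustrative and not part of this patch):

#include <linux/cpumask.h>
#include <linux/device.h>
#include <linux/pm_opp.h>
#include <linux/topology.h>

/* Init time: mark the CPU's OPP table as shared by its whole cluster. */
static int example_mark_opps_shared(struct device *cpu_dev)
{
	return dev_pm_opp_set_sharing_cpus(cpu_dev,
					   topology_core_cpumask(cpu_dev->id));
}

/* Driver time: retrieve the mask recorded above, unaffected by hotplug. */
static int example_read_shared_cpus(struct device *cpu_dev,
				    struct cpumask *mask)
{
	return dev_pm_opp_get_sharing_cpus(cpu_dev, mask);
}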
Cc: Liviu Dudau
Cc: Lorenzo Pieralisi
Tested-by: Dietmar Eggemann
Signed-off-by: Sudeep Holla
---
 arch/arm/mach-vexpress/spc.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/arm/mach-vexpress/spc.c b/arch/arm/mach-vexpress/spc.c
index 354e0e7025ae..1da11bdb1dfb 100644
--- a/arch/arm/mach-vexpress/spc.c
+++ b/arch/arm/mach-vexpress/spc.c
@@ -551,8 +551,9 @@ static struct clk *ve_spc_clk_register(struct device *cpu_dev)
 
 static int __init ve_spc_clk_init(void)
 {
-	int cpu;
+	int cpu, cluster;
 	struct clk *clk;
+	bool init_opp_table[MAX_CLUSTERS] = { false };
 
 	if (!info)
 		return 0; /* Continue only if SPC is initialised */
@@ -578,8 +579,17 @@ static int __init ve_spc_clk_init(void)
 			continue;
 		}
 
+		cluster = topology_physical_package_id(cpu_dev->id);
+		if (init_opp_table[cluster])
+			continue;
+
 		if (ve_init_opp_table(cpu_dev))
 			pr_warn("failed to initialise cpu%d opp table\n", cpu);
+		else if (dev_pm_opp_set_sharing_cpus(cpu_dev,
+			 topology_core_cpumask(cpu_dev->id)))
+			pr_warn("failed to mark OPPs shared for cpu%d\n", cpu);
+		else
+			init_opp_table[cluster] = true;
 	}
 
 	platform_device_register_simple("vexpress-spc-cpufreq", -1, NULL, 0);

From patchwork Thu Nov 28 10:15:47 2019
X-Patchwork-Submitter: Sudeep Holla
X-Patchwork-Id: 11265677
X-Patchwork-Delegate: viresh.linux@gmail.com
From: Sudeep Holla
To: linux-pm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org
Cc: Sudeep Holla, "Rafael J . Wysocki", Liviu Dudau, Viresh Kumar,
 Dietmar Eggemann, Lorenzo Pieralisi, Morten Rasmussen, Lukasz Luba
Subject: [PATCH 2/2] cpufreq: vexpress-spc: Switch cpumask from topology
 core to OPP sharing
Date: Thu, 28 Nov 2019 10:15:47 +0000
Message-Id: <20191128101547.519-2-sudeep.holla@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20191128101547.519-1-sudeep.holla@arm.com>
References: <20191128101547.519-1-sudeep.holla@arm.com>
X-Mailing-List: linux-pm@vger.kernel.org

Since commit ca74b316df96 ("arm: Use common cpu_topology structure and
functions.") the core cpumask has to be modified during CPU hotplug
operations. Using it to set up the cpufreq policy cpumask may therefore
be incorrect, as it may contain only the CPUs that are online at that
instant.
Instead, we can use the cpumask set up by the OPP library, which
contains all the CPUs sharing the OPP table, via
dev_pm_opp_get_sharing_cpus().

Cc: Viresh Kumar
Tested-by: Dietmar Eggemann
Signed-off-by: Sudeep Holla
Acked-by: Viresh Kumar
---
 drivers/cpufreq/vexpress-spc-cpufreq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/cpufreq/vexpress-spc-cpufreq.c b/drivers/cpufreq/vexpress-spc-cpufreq.c
index 506e3f2bf53a..83c85d3d67e3 100644
--- a/drivers/cpufreq/vexpress-spc-cpufreq.c
+++ b/drivers/cpufreq/vexpress-spc-cpufreq.c
@@ -434,7 +434,7 @@ static int ve_spc_cpufreq_init(struct cpufreq_policy *policy)
 	if (cur_cluster < MAX_CLUSTERS) {
 		int cpu;
 
-		cpumask_copy(policy->cpus, topology_core_cpumask(policy->cpu));
+		dev_pm_opp_get_sharing_cpus(cpu_dev, policy->cpus);
 
 		for_each_cpu(cpu, policy->cpus)
 			per_cpu(physical_cluster, cpu) = cur_cluster;
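For context, a minimal sketch of the driver-side path after this change
(assuming, as elsewhere in this driver, that cpu_dev comes from
get_cpu_device(policy->cpu); the example_* name is illustrative and the
per-cluster bookkeeping is elided):

#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <linux/device.h>
#include <linux/errno.h>
#include <linux/pm_opp.h>

static int example_init_policy_cpus(struct cpufreq_policy *policy)
{
	struct device *cpu_dev = get_cpu_device(policy->cpu);

	if (!cpu_dev)
		return -ENODEV;

	/*
	 * Fill policy->cpus from the OPP sharing mask recorded at init
	 * time, instead of the hotplug-volatile topology core cpumask.
	 */
	return dev_pm_opp_get_sharing_cpus(cpu_dev, policy->cpus);
}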