From patchwork Mon Dec 14 12:38:20 2020
From: Ionela Voinescu
To: rjw@rjwysocki.net, viresh.kumar@linaro.org, lenb@kernel.org
Cc: yousaf.kaukab@suse.com, jeremy.linton@arm.com, lukasz.luba@arm.com,
    valentin.schneider@arm.com, linux-acpi@vger.kernel.org,
    linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org,
    ionela.voinescu@arm.com
Subject: [PATCH v2 1/4] cppc_cpufreq: use policy->cpu as driver of frequency setting
Date: Mon, 14 Dec 2020 12:38:20 +0000
Message-Id: <20201214123823.3949-2-ionela.voinescu@arm.com>
In-Reply-To: <20201214123823.3949-1-ionela.voinescu@arm.com>
References: <20201214123823.3949-1-ionela.voinescu@arm.com>

Considering only the currently supported coordination types (ANY, HW, NONE),
this change only makes a difference for the ANY type, when policy->cpu is
hotplugged out. In that case the new policy->cpu will be different from
((struct cppc_cpudata *)policy->driver_data)->cpu.

While in this case the controls of *ANY* CPU could be used to drive frequency
changes, it's more consistent to use policy->cpu as the leading CPU, as used
in all other cppc_cpufreq functions. Additionally, the debug prints in
cppc_set_perf() would no longer create confusion when referring to a CPU that
is hotplugged out.
Signed-off-by: Ionela Voinescu
Acked-by: Viresh Kumar
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
---
 drivers/cpufreq/cppc_cpufreq.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c
index 7cc9bd8568de..2700fc71d4e8 100644
--- a/drivers/cpufreq/cppc_cpufreq.c
+++ b/drivers/cpufreq/cppc_cpufreq.c
@@ -150,6 +150,7 @@ static int cppc_cpufreq_set_target(struct cpufreq_policy *policy,
 				   unsigned int relation)
 {
 	struct cppc_cpudata *cpu_data = all_cpu_data[policy->cpu];
+	unsigned int cpu = policy->cpu;
 	struct cpufreq_freqs freqs;
 	u32 desired_perf;
 	int ret = 0;
@@ -164,12 +165,12 @@ static int cppc_cpufreq_set_target(struct cpufreq_policy *policy,
 	freqs.new = target_freq;
 
 	cpufreq_freq_transition_begin(policy, &freqs);
-	ret = cppc_set_perf(cpu_data->cpu, &cpu_data->perf_ctrls);
+	ret = cppc_set_perf(cpu, &cpu_data->perf_ctrls);
 	cpufreq_freq_transition_end(policy, &freqs, ret != 0);
 
 	if (ret)
 		pr_debug("Failed to set target on CPU:%d. ret:%d\n",
-			 cpu_data->cpu, ret);
+			 cpu, ret);
 
 	return ret;
 }
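[Editor's note] For reference, the function after this change reads roughly as
below. This is a condensed sketch assembled from the two hunks above, with the
unchanged middle section elided, so not a verbatim copy of the file:

	static int cppc_cpufreq_set_target(struct cpufreq_policy *policy,
					   unsigned int target_freq,
					   unsigned int relation)
	{
		struct cppc_cpudata *cpu_data = all_cpu_data[policy->cpu];
		unsigned int cpu = policy->cpu;	/* leading CPU of the policy */
		struct cpufreq_freqs freqs;
		u32 desired_perf;
		int ret = 0;

		/* ... compute desired_perf and fill in freqs (unchanged) ... */

		cpufreq_freq_transition_begin(policy, &freqs);
		/* requests are driven by policy->cpu, not by cpu_data->cpu */
		ret = cppc_set_perf(cpu, &cpu_data->perf_ctrls);
		cpufreq_freq_transition_end(policy, &freqs, ret != 0);

		if (ret)
			pr_debug("Failed to set target on CPU:%d. ret:%d\n",
				 cpu, ret);

		return ret;
	}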
From patchwork Mon Dec 14 12:38:21 2020
From: Ionela Voinescu
To: rjw@rjwysocki.net, viresh.kumar@linaro.org, lenb@kernel.org
Cc: yousaf.kaukab@suse.com, jeremy.linton@arm.com, lukasz.luba@arm.com,
    valentin.schneider@arm.com, linux-acpi@vger.kernel.org,
    linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org,
    ionela.voinescu@arm.com
Subject: [PATCH v2 2/4] cppc_cpufreq: clarify support for coordination types
Date: Mon, 14 Dec 2020 12:38:21 +0000
Message-Id: <20201214123823.3949-3-ionela.voinescu@arm.com>
In-Reply-To: <20201214123823.3949-1-ionela.voinescu@arm.com>
References: <20201214123823.3949-1-ionela.voinescu@arm.com>

The previous coordination type handling in the cppc_cpufreq init code created
some confusion: the comment mentioned "Support only SW_ANY for now" while only
the SW_ALL/ALL case resulted in a failure. The other coordination types
(HW_ALL/HW, NONE) were silently supported.

Clarify support for coordination types while describing in comments the
intended behavior.

Signed-off-by: Ionela Voinescu
Acked-by: Viresh Kumar
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
---
 drivers/cpufreq/cppc_cpufreq.c | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c
index 2700fc71d4e8..f15a44c8b6b7 100644
--- a/drivers/cpufreq/cppc_cpufreq.c
+++ b/drivers/cpufreq/cppc_cpufreq.c
@@ -244,7 +244,7 @@ static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)
 	struct cppc_cpudata *cpu_data = all_cpu_data[policy->cpu];
 	struct cppc_perf_caps *caps = &cpu_data->perf_caps;
 	unsigned int cpu = policy->cpu;
-	int ret = 0;
+	int i, ret = 0;
 
 	cpu_data->cpu = cpu;
 	ret = cppc_get_perf_caps(cpu, caps);
@@ -281,9 +281,13 @@ static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)
 	policy->transition_delay_us = cppc_cpufreq_get_transition_delay_us(cpu);
 
 	policy->shared_type = cpu_data->shared_type;
-	if (policy->shared_type == CPUFREQ_SHARED_TYPE_ANY) {
-		int i;
-
+	switch (policy->shared_type) {
+	case CPUFREQ_SHARED_TYPE_HW:
+	case CPUFREQ_SHARED_TYPE_NONE:
+		/* Nothing to be done - we'll have a policy for each CPU */
+		break;
+	case CPUFREQ_SHARED_TYPE_ANY:
+		/* All CPUs in the domain will share a policy */
 		cpumask_copy(policy->cpus, cpu_data->shared_cpu_map);
 
 		for_each_cpu(i, policy->cpus) {
@@ -293,9 +297,10 @@ static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)
 			memcpy(&all_cpu_data[i]->perf_caps, caps,
 			       sizeof(cpu_data->perf_caps));
 		}
-	} else if (policy->shared_type == CPUFREQ_SHARED_TYPE_ALL) {
-		/* Support only SW_ANY for now. */
-		pr_debug("Unsupported CPU co-ord type\n");
+		break;
+	default:
+		pr_debug("Unsupported CPU co-ord type: %d\n",
+			 policy->shared_type);
 		return -EFAULT;
 	}
 
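[Editor's note] As background for the coordination-type names used in this
series: the driver's shared_type is derived from the ACPI _PSD coordination
type when the domain is parsed (see the acpi_get_psd_map() changes in patch
4/4 below); single-CPU domains keep the default CPUFREQ_SHARED_TYPE_NONE.
A condensed sketch of that mapping, not a verbatim copy:

	/* _PSD coordination type -> cpufreq shared_type */
	if (pdomain->coord_type == DOMAIN_COORD_TYPE_SW_ALL)
		cpu_data->shared_type = CPUFREQ_SHARED_TYPE_ALL;	/* rejected by cppc_cpufreq */
	else if (pdomain->coord_type == DOMAIN_COORD_TYPE_HW_ALL)
		cpu_data->shared_type = CPUFREQ_SHARED_TYPE_HW;		/* hardware coordination */
	else if (pdomain->coord_type == DOMAIN_COORD_TYPE_SW_ANY)
		cpu_data->shared_type = CPUFREQ_SHARED_TYPE_ANY;	/* one policy per domain */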
From patchwork Mon Dec 14 12:38:22 2020
From: Ionela Voinescu
To: rjw@rjwysocki.net, viresh.kumar@linaro.org, lenb@kernel.org
Cc: yousaf.kaukab@suse.com, jeremy.linton@arm.com, lukasz.luba@arm.com,
    valentin.schneider@arm.com, linux-acpi@vger.kernel.org,
    linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org,
    ionela.voinescu@arm.com
Subject: [PATCH v2 3/4] cppc_cpufreq: expose information on frequency domains
Date: Mon, 14 Dec 2020 12:38:22 +0000
Message-Id: <20201214123823.3949-4-ionela.voinescu@arm.com>
In-Reply-To: <20201214123823.3949-1-ionela.voinescu@arm.com>
References: <20201214123823.3949-1-ionela.voinescu@arm.com>

Use the existing sysfs attribute "freqdomain_cpus" to expose information to
userspace about CPUs in the same frequency domain.

Signed-off-by: Ionela Voinescu
Acked-by: Viresh Kumar
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
---
 Documentation/ABI/testing/sysfs-devices-system-cpu |  3 ++-
 drivers/cpufreq/cppc_cpufreq.c                     | 14 ++++++++++++++
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
index 1a04ca8162ad..0eee30b27ab6 100644
--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -264,7 +264,8 @@ Description:	Discover CPUs in the same CPU frequency coordination domain
 		attribute is useful for user space DVFS controllers to get better
 		power/performance results for platforms using acpi-cpufreq.
 
-		This file is only present if the acpi-cpufreq driver is in use.
+		This file is only present if the acpi-cpufreq or the cppc-cpufreq
+		drivers are in use.
 
 
 What:		/sys/devices/system/cpu/cpu*/cache/index3/cache_disable_{0,1}
diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c
index f15a44c8b6b7..40b58d2dbbc6 100644
--- a/drivers/cpufreq/cppc_cpufreq.c
+++ b/drivers/cpufreq/cppc_cpufreq.c
@@ -402,6 +402,19 @@ static int cppc_cpufreq_set_boost(struct cpufreq_policy *policy, int state)
 	return 0;
 }
 
+static ssize_t show_freqdomain_cpus(struct cpufreq_policy *policy, char *buf)
+{
+	unsigned int cpu = policy->cpu;
+
+	return cpufreq_show_cpus(all_cpu_data[cpu]->shared_cpu_map, buf);
+}
+cpufreq_freq_attr_ro(freqdomain_cpus);
+
+static struct freq_attr *cppc_cpufreq_attr[] = {
+	&freqdomain_cpus,
+	NULL,
+};
+
 static struct cpufreq_driver cppc_cpufreq_driver = {
 	.flags = CPUFREQ_CONST_LOOPS,
 	.verify = cppc_verify_policy,
@@ -410,6 +423,7 @@ static struct cpufreq_driver cppc_cpufreq_driver = {
 	.init = cppc_cpufreq_cpu_init,
 	.stop_cpu = cppc_cpufreq_stop_cpu,
 	.set_boost = cppc_cpufreq_set_boost,
+	.attr = cppc_cpufreq_attr,
 	.name = "cppc_cpufreq",
 };
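[Editor's note] As a usage illustration (hypothetical values; the actual CPU
list depends on the platform's _PSD domains), on a system where CPUs 0-3 share
a frequency domain, reading the new attribute for any CPU of that policy would
return something like:

	$ cat /sys/devices/system/cpu/cpu2/cpufreq/freqdomain_cpus
	0 1 2 3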
From patchwork Mon Dec 14 12:38:23 2020
From: Ionela Voinescu
To: rjw@rjwysocki.net, viresh.kumar@linaro.org, lenb@kernel.org
Cc: yousaf.kaukab@suse.com, jeremy.linton@arm.com, lukasz.luba@arm.com,
    valentin.schneider@arm.com, linux-acpi@vger.kernel.org,
    linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org,
    ionela.voinescu@arm.com
Subject: [PATCH v2 4/4] cppc_cpufreq: replace per-cpu data array with a list
Date: Mon, 14 Dec 2020 12:38:23 +0000
Message-Id: <20201214123823.3949-5-ionela.voinescu@arm.com>
In-Reply-To: <20201214123823.3949-1-ionela.voinescu@arm.com>
References: <20201214123823.3949-1-ionela.voinescu@arm.com>

The cppc_cpudata per-cpu storage was inefficient (1), in addition to causing
functional issues (2) when CPUs are hotplugged out, due to per-cpu data being
improperly initialised.

(1) The amount of information needed for CPPC performance control in its
    cpufreq driver depends on the domain (PSD) coordination type:

    ANY:  One set of CPPC control and capability data (e.g. desired
          performance, highest/lowest performance, etc) applies to all CPUs
          in the domain.

    ALL:  Same as ANY. To be noted that this type is not currently supported.
          When supported, information about which CPUs belong to a domain is
          needed in order for frequency change requests to be sent to each
          of them.

    HW:   It's necessary to store CPPC control and capability information
          for all the CPUs. HW will then coordinate the performance state
          based on their limitations and requests.

    NONE: Same as HW. No HW coordination is expected.

    Despite this, the previous initialisation code would indiscriminately
    allocate memory for all CPUs (all_cpu_data) and unnecessarily duplicate
    the performance capabilities and the domain sharing mask and type for
    each possible CPU.

(2) With the current per-cpu structure, when having ANY coordination, the
    cppc_cpudata cpu information is not initialised (will remain 0) for all
    CPUs in a policy, other than policy->cpu. When policy->cpu is hotplugged
    out, the driver will incorrectly use the uninitialised (0) value of the
    other CPUs when making frequency changes. Additionally, the previous
    values stored in perf_ctrls.desired_perf will be lost when policy->cpu
    changes.

Therefore, replace the array of per-cpu data with a list. The memory for each
structure is allocated at policy init, where a single structure can be
allocated per policy, not per CPU. In order to accommodate the struct
list_head node in the cppc_cpudata structure, the now unused cpu and
cur_policy variables are removed.
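[Editor's note] The replacement storage is the standard kernel linked list.
Below is a minimal, self-contained sketch of the pattern adopted by this
patch, using simplified stand-in names rather than the driver code itself:

	#include <linux/list.h>
	#include <linux/slab.h>

	static LIST_HEAD(cpu_data_list);	/* one entry per policy, not per CPU */

	struct cppc_cpudata_example {		/* simplified stand-in for cppc_cpudata */
		struct list_head node;
		unsigned int shared_type;
	};

	/* Allocate at policy init and link into the driver-local list. */
	static struct cppc_cpudata_example *example_alloc(void)
	{
		struct cppc_cpudata_example *data = kzalloc(sizeof(*data), GFP_KERNEL);

		if (data)
			list_add(&data->node, &cpu_data_list);
		return data;
	}

	/* Free everything on driver exit, as free_cpu_data() does in the diff below. */
	static void example_free_all(void)
	{
		struct cppc_cpudata_example *iter, *tmp;

		list_for_each_entry_safe(iter, tmp, &cpu_data_list, node) {
			list_del(&iter->node);
			kfree(iter);
		}
	}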
For example, on an arm64 Juno platform with 6 CPUs: (0, 1, 2, 3) in PSD1,
(4, 5) in PSD2 - ANY coordination, the memory allocation comparison shows:

Before patch:

 - ANY coordination:
   total    slack     req   alloc/free   caller
       0        0       0   0/1          _kernel_size_le_hi32+0x0xffff800008ff7810
       0        0       0   0/6          _kernel_size_le_hi32+0x0xffff800008ff7808
     128       80      48   1/0          _kernel_size_le_hi32+0x0xffff800008ffc070
     768        0     768   6/0          _kernel_size_le_hi32+0x0xffff800008ffc0e4

After patch:

 - ANY coordination:
   total    slack     req   alloc/free   caller
     256        0     256   2/0          _kernel_size_le_hi32+0x0xffff800008fed410
       0        0       0   0/2          _kernel_size_le_hi32+0x0xffff800008fed274

Additional notes:
- A pointer to the policy's cppc_cpudata is stored in policy->driver_data.
- Driver registration is skipped if _CPC entries are not present.

Signed-off-by: Ionela Voinescu
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
---
 drivers/acpi/cppc_acpi.c       | 141 ++++++++++++--------------
 drivers/cpufreq/cppc_cpufreq.c | 174 +++++++++++++++++----------------
 include/acpi/cppc_acpi.h       |   6 +-
 3 files changed, 154 insertions(+), 167 deletions(-)

diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
index 62f55db443c1..75aaf94ae0a9 100644
--- a/drivers/acpi/cppc_acpi.c
+++ b/drivers/acpi/cppc_acpi.c
@@ -414,109 +414,88 @@ static int acpi_get_psd(struct cpc_desc *cpc_ptr, acpi_handle handle)
 	return result;
 }
 
+bool acpi_cpc_valid(void)
+{
+	struct cpc_desc *cpc_ptr;
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		cpc_ptr = per_cpu(cpc_desc_ptr, cpu);
+		if (!cpc_ptr)
+			return false;
+	}
+
+	return true;
+}
+EXPORT_SYMBOL_GPL(acpi_cpc_valid);
+
 /**
- * acpi_get_psd_map - Map the CPUs in a common freq domain.
- * @all_cpu_data: Ptrs to CPU specific CPPC data including PSD info.
+ * acpi_get_psd_map - Map the CPUs in the freq domain of a given cpu
+ * @cpu: Find all CPUs that share a domain with cpu.
+ * @cpu_data: Pointer to CPU specific CPPC data including PSD info.
  *
  * Return: 0 for success or negative value for err.
  */
-int acpi_get_psd_map(struct cppc_cpudata **all_cpu_data)
+int acpi_get_psd_map(unsigned int cpu, struct cppc_cpudata *cpu_data)
 {
-	int count_target;
-	int retval = 0;
-	unsigned int i, j;
-	cpumask_var_t covered_cpus;
-	struct cppc_cpudata *pr, *match_pr;
-	struct acpi_psd_package *pdomain;
-	struct acpi_psd_package *match_pdomain;
 	struct cpc_desc *cpc_ptr, *match_cpc_ptr;
-
-	if (!zalloc_cpumask_var(&covered_cpus, GFP_KERNEL))
-		return -ENOMEM;
+	struct acpi_psd_package *match_pdomain;
+	struct acpi_psd_package *pdomain;
+	int count_target, i;
 
 	/*
 	 * Now that we have _PSD data from all CPUs, let's setup P-state
 	 * domain info.
 	 */
-	for_each_possible_cpu(i) {
-		if (cpumask_test_cpu(i, covered_cpus))
-			continue;
-
-		pr = all_cpu_data[i];
-		cpc_ptr = per_cpu(cpc_desc_ptr, i);
-		if (!cpc_ptr) {
-			retval = -EFAULT;
-			goto err_ret;
-		}
+	cpc_ptr = per_cpu(cpc_desc_ptr, cpu);
+	if (!cpc_ptr)
+		return -EFAULT;
 
-		pdomain = &(cpc_ptr->domain_info);
-		cpumask_set_cpu(i, pr->shared_cpu_map);
-		cpumask_set_cpu(i, covered_cpus);
-		if (pdomain->num_processors <= 1)
-			continue;
+	pdomain = &(cpc_ptr->domain_info);
+	cpumask_set_cpu(cpu, cpu_data->shared_cpu_map);
+	if (pdomain->num_processors <= 1)
+		return 0;
 
-		/* Validate the Domain info */
-		count_target = pdomain->num_processors;
-		if (pdomain->coord_type == DOMAIN_COORD_TYPE_SW_ALL)
-			pr->shared_type = CPUFREQ_SHARED_TYPE_ALL;
-		else if (pdomain->coord_type == DOMAIN_COORD_TYPE_HW_ALL)
-			pr->shared_type = CPUFREQ_SHARED_TYPE_HW;
-		else if (pdomain->coord_type == DOMAIN_COORD_TYPE_SW_ANY)
-			pr->shared_type = CPUFREQ_SHARED_TYPE_ANY;
-
-		for_each_possible_cpu(j) {
-			if (i == j)
-				continue;
-
-			match_cpc_ptr = per_cpu(cpc_desc_ptr, j);
-			if (!match_cpc_ptr) {
-				retval = -EFAULT;
-				goto err_ret;
-			}
+	/* Validate the Domain info */
+	count_target = pdomain->num_processors;
+	if (pdomain->coord_type == DOMAIN_COORD_TYPE_SW_ALL)
+		cpu_data->shared_type = CPUFREQ_SHARED_TYPE_ALL;
+	else if (pdomain->coord_type == DOMAIN_COORD_TYPE_HW_ALL)
+		cpu_data->shared_type = CPUFREQ_SHARED_TYPE_HW;
+	else if (pdomain->coord_type == DOMAIN_COORD_TYPE_SW_ANY)
+		cpu_data->shared_type = CPUFREQ_SHARED_TYPE_ANY;
 
-			match_pdomain = &(match_cpc_ptr->domain_info);
-			if (match_pdomain->domain != pdomain->domain)
-				continue;
+	for_each_possible_cpu(i) {
+		if (i == cpu)
+			continue;
 
-			/* Here i and j are in the same domain */
-			if (match_pdomain->num_processors != count_target) {
-				retval = -EFAULT;
-				goto err_ret;
-			}
+		match_cpc_ptr = per_cpu(cpc_desc_ptr, i);
+		if (!match_cpc_ptr)
+			goto err_fault;
 
-			if (pdomain->coord_type != match_pdomain->coord_type) {
-				retval = -EFAULT;
-				goto err_ret;
-			}
+		match_pdomain = &(match_cpc_ptr->domain_info);
+		if (match_pdomain->domain != pdomain->domain)
+			continue;
 
-			cpumask_set_cpu(j, covered_cpus);
-			cpumask_set_cpu(j, pr->shared_cpu_map);
-		}
+		/* Here i and cpu are in the same domain */
+		if (match_pdomain->num_processors != count_target)
+			goto err_fault;
 
-		for_each_cpu(j, pr->shared_cpu_map) {
-			if (i == j)
-				continue;
+		if (pdomain->coord_type != match_pdomain->coord_type)
+			goto err_fault;
 
-			match_pr = all_cpu_data[j];
-			match_pr->shared_type = pr->shared_type;
-			cpumask_copy(match_pr->shared_cpu_map,
-				     pr->shared_cpu_map);
-		}
+		cpumask_set_cpu(i, cpu_data->shared_cpu_map);
 	}
 
-	goto out;
-
-err_ret:
-	for_each_possible_cpu(i) {
-		pr = all_cpu_data[i];
+	return 0;
 
-		/* Assume no coordination on any error parsing domain info */
-		cpumask_clear(pr->shared_cpu_map);
-		cpumask_set_cpu(i, pr->shared_cpu_map);
-		pr->shared_type = CPUFREQ_SHARED_TYPE_NONE;
-	}
-out:
-	free_cpumask_var(covered_cpus);
-	return retval;
+err_fault:
+	/* Assume no coordination on any error parsing domain info */
+	cpumask_clear(cpu_data->shared_cpu_map);
+	cpumask_set_cpu(cpu, cpu_data->shared_cpu_map);
+	cpu_data->shared_type = CPUFREQ_SHARED_TYPE_NONE;
+
+	return -EFAULT;
 }
 EXPORT_SYMBOL_GPL(acpi_get_psd_map);
 
diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c
index 40b58d2dbbc6..8a482c434ea6 100644
--- a/drivers/cpufreq/cppc_cpufreq.c
+++ b/drivers/cpufreq/cppc_cpufreq.c
@@ -30,13 +30,13 @@
 
 #define DMI_PROCESSOR_MAX_SPEED		0x14
 
 /*
- * These structs contain information parsed from per CPU
- * ACPI _CPC structures.
- * e.g. For each CPU the highest, lowest supported
- * performance capabilities, desired performance level
- * requested etc.
+ * This list contains information parsed from per CPU ACPI _CPC and _PSD
+ * structures: e.g. the highest and lowest supported performance, capabilities,
+ * desired performance, level requested etc. Depending on the share_type, not
+ * all CPUs will have an entry in the list.
  */
-static struct cppc_cpudata **all_cpu_data;
+static LIST_HEAD(cpu_data_list);
+
 static bool boost_supported;
 
 struct cppc_workaround_oem_info {
@@ -148,8 +148,9 @@ static unsigned int cppc_cpufreq_khz_to_perf(struct cppc_cpudata *cpu_data,
 static int cppc_cpufreq_set_target(struct cpufreq_policy *policy,
 				   unsigned int target_freq,
 				   unsigned int relation)
+
 {
-	struct cppc_cpudata *cpu_data = all_cpu_data[policy->cpu];
+	struct cppc_cpudata *cpu_data = policy->driver_data;
 	unsigned int cpu = policy->cpu;
 	struct cpufreq_freqs freqs;
 	u32 desired_perf;
@@ -183,7 +184,7 @@ static int cppc_verify_policy(struct cpufreq_policy_data *policy)
 
 static void cppc_cpufreq_stop_cpu(struct cpufreq_policy *policy)
 {
-	struct cppc_cpudata *cpu_data = all_cpu_data[policy->cpu];
+	struct cppc_cpudata *cpu_data = policy->driver_data;
 	struct cppc_perf_caps *caps = &cpu_data->perf_caps;
 	unsigned int cpu = policy->cpu;
 	int ret;
@@ -194,6 +195,12 @@ static void cppc_cpufreq_stop_cpu(struct cpufreq_policy *policy)
 	if (ret)
 		pr_debug("Err setting perf value:%d on CPU:%d. ret:%d\n",
 			 caps->lowest_perf, cpu, ret);
+
+	/* Remove CPU node from list and free driver data for policy */
+	free_cpumask_var(cpu_data->shared_cpu_map);
+	list_del(&cpu_data->node);
+	kfree(policy->driver_data);
+	policy->driver_data = NULL;
 }
 
 /*
@@ -239,25 +246,61 @@ static unsigned int cppc_cpufreq_get_transition_delay_us(unsigned int cpu)
 }
 #endif
 
-static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)
+
+static struct cppc_cpudata *cppc_cpufreq_get_cpu_data(unsigned int cpu)
 {
-	struct cppc_cpudata *cpu_data = all_cpu_data[policy->cpu];
-	struct cppc_perf_caps *caps = &cpu_data->perf_caps;
-	unsigned int cpu = policy->cpu;
-	int i, ret = 0;
+	struct cppc_cpudata *cpu_data;
+	int ret;
+
+	cpu_data = kzalloc(sizeof(struct cppc_cpudata), GFP_KERNEL);
+	if (!cpu_data)
+		goto out;
 
-	cpu_data->cpu = cpu;
-	ret = cppc_get_perf_caps(cpu, caps);
+	if (!zalloc_cpumask_var(&cpu_data->shared_cpu_map, GFP_KERNEL))
+		goto free_cpu;
+	ret = acpi_get_psd_map(cpu, cpu_data);
 	if (ret) {
-		pr_debug("Err reading CPU%d perf capabilities. ret:%d\n",
ret:%d\n", - cpu, ret); - return ret; + pr_debug("Err parsing CPU%d PSD data: ret:%d\n", cpu, ret); + goto free_mask; + } + + ret = cppc_get_perf_caps(cpu, &cpu_data->perf_caps); + if (ret) { + pr_debug("Err reading CPU%d perf caps: ret:%d\n", cpu, ret); + goto free_mask; } /* Convert the lowest and nominal freq from MHz to KHz */ - caps->lowest_freq *= 1000; - caps->nominal_freq *= 1000; + cpu_data->perf_caps.lowest_freq *= 1000; + cpu_data->perf_caps.nominal_freq *= 1000; + + list_add(&cpu_data->node, &cpu_data_list); + + return cpu_data; + +free_mask: + free_cpumask_var(cpu_data->shared_cpu_map); +free_cpu: + kfree(cpu_data); +out: + return NULL; +} + +static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy) +{ + unsigned int cpu = policy->cpu; + struct cppc_cpudata *cpu_data; + struct cppc_perf_caps *caps; + int ret; + + cpu_data = cppc_cpufreq_get_cpu_data(cpu); + if (!cpu_data) { + pr_err("Error in acquiring _CPC/_PSD data for CPU%d.\n", cpu); + return -ENODEV; + } + caps = &cpu_data->perf_caps; + policy->driver_data = cpu_data; /* * Set min to lowest nonlinear perf to avoid any efficiency penalty (see @@ -287,16 +330,12 @@ static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy) /* Nothing to be done - we'll have a policy for each CPU */ break; case CPUFREQ_SHARED_TYPE_ANY: - /* All CPUs in the domain will share a policy */ + /* + * All CPUs in the domain will share a policy and all cpufreq + * operations will use a single cppc_cpudata structure stored + * in policy->driver_data. + */ cpumask_copy(policy->cpus, cpu_data->shared_cpu_map); - - for_each_cpu(i, policy->cpus) { - if (unlikely(i == cpu)) - continue; - - memcpy(&all_cpu_data[i]->perf_caps, caps, - sizeof(cpu_data->perf_caps)); - } break; default: pr_debug("Unsupported CPU co-ord type: %d\n", @@ -304,8 +343,6 @@ static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy) return -EFAULT; } - cpu_data->cur_policy = policy; - /* * If 'highest_perf' is greater than 'nominal_perf', we assume CPU Boost * is supported. 
@@ -360,9 +397,12 @@ static int cppc_get_rate_from_fbctrs(struct cppc_cpudata *cpu_data,
 static unsigned int cppc_cpufreq_get_rate(unsigned int cpu)
 {
 	struct cppc_perf_fb_ctrs fb_ctrs_t0 = {0}, fb_ctrs_t1 = {0};
-	struct cppc_cpudata *cpu_data = all_cpu_data[cpu];
+	struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
+	struct cppc_cpudata *cpu_data = policy->driver_data;
 	int ret;
 
+	cpufreq_cpu_put(policy);
+
 	ret = cppc_get_perf_ctrs(cpu, &fb_ctrs_t0);
 	if (ret)
 		return ret;
@@ -378,7 +418,7 @@ static unsigned int cppc_cpufreq_get_rate(unsigned int cpu)
 
 static int cppc_cpufreq_set_boost(struct cpufreq_policy *policy, int state)
 {
-	struct cppc_cpudata *cpu_data = all_cpu_data[policy->cpu];
+	struct cppc_cpudata *cpu_data = policy->driver_data;
 	struct cppc_perf_caps *caps = &cpu_data->perf_caps;
 	int ret;
 
@@ -404,9 +444,9 @@ static int cppc_cpufreq_set_boost(struct cpufreq_policy *policy, int state)
 
 static ssize_t show_freqdomain_cpus(struct cpufreq_policy *policy, char *buf)
 {
-	unsigned int cpu = policy->cpu;
+	struct cppc_cpudata *cpu_data = policy->driver_data;
 
-	return cpufreq_show_cpus(all_cpu_data[cpu]->shared_cpu_map, buf);
+	return cpufreq_show_cpus(cpu_data->shared_cpu_map, buf);
 }
 cpufreq_freq_attr_ro(freqdomain_cpus);
 
@@ -435,10 +475,13 @@ static struct cpufreq_driver cppc_cpufreq_driver = {
  */
 static unsigned int hisi_cppc_cpufreq_get_rate(unsigned int cpu)
 {
-	struct cppc_cpudata *cpu_data = all_cpu_data[cpu];
+	struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
+	struct cppc_cpudata *cpu_data = policy->driver_data;
 	u64 desired_perf;
 	int ret;
 
+	cpufreq_cpu_put(policy);
+
 	ret = cppc_get_desired_perf(cpu, &desired_perf);
 	if (ret < 0)
 		return -EIO;
@@ -471,68 +514,33 @@ static void cppc_check_hisi_workaround(void)
 
 static int __init cppc_cpufreq_init(void)
 {
-	struct cppc_cpudata *cpu_data;
-	int i, ret = 0;
-
-	if (acpi_disabled)
+	if ((acpi_disabled) || !acpi_cpc_valid())
 		return -ENODEV;
 
-	all_cpu_data = kcalloc(num_possible_cpus(), sizeof(void *),
-			       GFP_KERNEL);
-	if (!all_cpu_data)
-		return -ENOMEM;
-
-	for_each_possible_cpu(i) {
-		all_cpu_data[i] = kzalloc(sizeof(struct cppc_cpudata), GFP_KERNEL);
-		if (!all_cpu_data[i])
-			goto out;
-
-		cpu_data = all_cpu_data[i];
-		if (!zalloc_cpumask_var(&cpu_data->shared_cpu_map, GFP_KERNEL))
-			goto out;
-	}
-
-	ret = acpi_get_psd_map(all_cpu_data);
-	if (ret) {
-		pr_debug("Error parsing PSD data. Aborting cpufreq registration.\n");
-		goto out;
-	}
+	INIT_LIST_HEAD(&cpu_data_list);
 
 	cppc_check_hisi_workaround();
 
-	ret = cpufreq_register_driver(&cppc_cpufreq_driver);
-	if (ret)
-		goto out;
+	return cpufreq_register_driver(&cppc_cpufreq_driver);
+}
 
-	return ret;
+static inline void free_cpu_data(void)
+{
+	struct cppc_cpudata *iter, *tmp;
 
-out:
-	for_each_possible_cpu(i) {
-		cpu_data = all_cpu_data[i];
-		if (!cpu_data)
-			break;
-		free_cpumask_var(cpu_data->shared_cpu_map);
-		kfree(cpu_data);
+	list_for_each_entry_safe(iter, tmp, &cpu_data_list, node) {
+		free_cpumask_var(iter->shared_cpu_map);
+		list_del(&iter->node);
+		kfree(iter);
 	}
 
-	kfree(all_cpu_data);
-	return -ENODEV;
 }
 
 static void __exit cppc_cpufreq_exit(void)
 {
-	struct cppc_cpudata *cpu_data;
-	int i;
-
 	cpufreq_unregister_driver(&cppc_cpufreq_driver);
 
-	for_each_possible_cpu(i) {
-		cpu_data = all_cpu_data[i];
-		free_cpumask_var(cpu_data->shared_cpu_map);
-		kfree(cpu_data);
-	}
-
-	kfree(all_cpu_data);
+	free_cpu_data();
 }
 
 module_exit(cppc_cpufreq_exit);
diff --git a/include/acpi/cppc_acpi.h b/include/acpi/cppc_acpi.h
index a6a9373ab863..232838d28f50 100644
--- a/include/acpi/cppc_acpi.h
+++ b/include/acpi/cppc_acpi.h
@@ -124,11 +124,10 @@ struct cppc_perf_fb_ctrs {
 
 /* Per CPU container for runtime CPPC management. */
 struct cppc_cpudata {
-	int cpu;
+	struct list_head node;
 	struct cppc_perf_caps perf_caps;
 	struct cppc_perf_ctrls perf_ctrls;
 	struct cppc_perf_fb_ctrs perf_fb_ctrs;
-	struct cpufreq_policy *cur_policy;
 	unsigned int shared_type;
 	cpumask_var_t shared_cpu_map;
 };
@@ -137,7 +136,8 @@ extern int cppc_get_desired_perf(int cpunum, u64 *desired_perf);
 extern int cppc_get_perf_ctrs(int cpu, struct cppc_perf_fb_ctrs *perf_fb_ctrs);
 extern int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls);
 extern int cppc_get_perf_caps(int cpu, struct cppc_perf_caps *caps);
-extern int acpi_get_psd_map(struct cppc_cpudata **);
+extern bool acpi_cpc_valid(void);
+extern int acpi_get_psd_map(unsigned int cpu, struct cppc_cpudata *cpu_data);
 extern unsigned int cppc_get_transition_latency(int cpu);
 extern bool cpc_ffh_supported(void);
 extern int cpc_read_ffh(int cpunum, struct cpc_reg *reg, u64 *val);
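[Editor's note] A closing note on the new data flow: callbacks that receive
only a CPU number (cppc_cpufreq_get_rate() and the HiSilicon variant) now
reach the per-policy data through the cpufreq core. A minimal sketch of that
pattern, using a hypothetical helper name and adding a NULL check purely for
illustration:

	/* Hypothetical helper: map a CPU number to its policy's cppc_cpudata. */
	static struct cppc_cpudata *example_cpu_to_cpu_data(unsigned int cpu)
	{
		struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
		struct cppc_cpudata *cpu_data;

		if (!policy)
			return NULL;

		cpu_data = policy->driver_data;	/* set in cppc_cpufreq_cpu_init() */
		cpufreq_cpu_put(policy);	/* drop the reference taken above */

		return cpu_data;
	}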