From patchwork Thu Sep 8 22:26:55 2016
X-Patchwork-Submitter: srinivas pandruvada
X-Patchwork-Id: 9322193
From: Srinivas Pandruvada
To: rjw@rjwysocki.net, tglx@linutronix.de, mingo@redhat.com, bp@suse.de
Cc: x86@kernel.org, linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-acpi@vger.kernel.org, peterz@infradead.org,
    tim.c.chen@linux.intel.com, Srinivas Pandruvada
Subject: [PATCH v3 8/8] cpufreq: intel_pstate: Use CPPC to get max performance
Date: Thu, 8 Sep 2016 15:26:55 -0700
Message-Id: <1473373615-51427-9-git-send-email-srinivas.pandruvada@linux.intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1473373615-51427-1-git-send-email-srinivas.pandruvada@linux.intel.com>
References: <1473373615-51427-1-git-send-email-srinivas.pandruvada@linux.intel.com>
X-Mailing-List: linux-acpi@vger.kernel.org

This change uses the ACPI CPPC lib interface to get CPPC performance
limits. Once the CPPC limits of all online cores have been read, first
check whether there is a difference in max performance between cores.
If there is a difference, call the scheduler interface to update the
per-CPU priority. After updating the priority of all current CPUs,
enable the ITMT feature.

Signed-off-by: Srinivas Pandruvada
---
 drivers/cpufreq/Kconfig.x86    |  1 +
 drivers/cpufreq/intel_pstate.c | 75 ++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 73 insertions(+), 3 deletions(-)

diff --git a/drivers/cpufreq/Kconfig.x86 b/drivers/cpufreq/Kconfig.x86
index adbd1de..3328c6b 100644
--- a/drivers/cpufreq/Kconfig.x86
+++ b/drivers/cpufreq/Kconfig.x86
@@ -6,6 +6,7 @@ config X86_INTEL_PSTATE
 	bool "Intel P state control"
 	depends on X86
 	select ACPI_PROCESSOR if ACPI
+	select ACPI_CPPC_LIB if X86_64 && ACPI
 	help
 	  This driver provides a P state for Intel core processors.
	  The driver implements an internal governor and will become

diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index bdbe936..a0bf244 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -44,6 +44,7 @@

 #ifdef CONFIG_ACPI
 #include <acpi/processor.h>
+#include <acpi/cppc_acpi.h>
 #endif

 #define FRAC_BITS 8
@@ -193,6 +194,8 @@ struct _pid {
 * @sample:		Storage for storing last Sample data
 * @acpi_perf_data:	Stores ACPI perf information read from _PSS
 * @valid_pss_table:	Set to true for valid ACPI _PSS entries found
+ * @cppc_data:		Stores CPPC information for HWP capable CPUs
+ * @valid_cppc_table:	Set to true when valid CPPC entries are found
 *
 * This structure stores per CPU instance data for all CPUs.
 */
@@ -215,6 +218,8 @@ struct cpudata {
 #ifdef CONFIG_ACPI
 	struct acpi_processor_performance acpi_perf_data;
 	bool valid_pss_table;
+	struct cppc_cpudata *cppc_data;
+	bool valid_cppc_table;
 #endif
 };

@@ -361,6 +366,15 @@ static struct perf_limits *limits = &powersave_limits;
 #endif

 #ifdef CONFIG_ACPI
+static cpumask_t cppc_rd_cpu_mask;
+
+/* Call set_sched_itmt from a work function to be able to use hotplug locks */
+static void intel_pstate_sched_itmt_work_fn(struct work_struct *work)
+{
+	set_sched_itmt(true);
+}
+
+static DECLARE_WORK(sched_itmt_work, intel_pstate_sched_itmt_work_fn);

 static bool intel_pstate_get_ppc_enable_status(void)
 {
@@ -377,14 +391,63 @@ static void intel_pstate_init_acpi_perf_limits(struct cpufreq_policy *policy)
 	int ret;
 	int i;

-	if (hwp_active)
+	cpu = all_cpu_data[policy->cpu];
+
+	if (hwp_active) {
+		struct cppc_perf_caps *perf_caps;
+
+		cpu->cppc_data = kzalloc(sizeof(struct cppc_cpudata),
+					 GFP_KERNEL);
+		if (!cpu->cppc_data)
+			return;
+
+		perf_caps = &cpu->cppc_data->perf_caps;
+		ret = cppc_get_perf_caps(policy->cpu, perf_caps);
+		if (ret) {
+			kfree(cpu->cppc_data);
+			return;
+		}
+
+		cpu->valid_cppc_table = true;
+		pr_debug("cpu:%d H:0x%x N:0x%x L:0x%x\n", policy->cpu,
+			 perf_caps->highest_perf, perf_caps->nominal_perf,
+			 perf_caps->lowest_perf);
+
+		cpumask_set_cpu(policy->cpu, &cppc_rd_cpu_mask);
+		if (cpumask_subset(topology_core_cpumask(policy->cpu),
+				   &cppc_rd_cpu_mask)) {
+			int cpu_index;
+			int max_prio;
+			bool itmt_support = false;
+
+			cpu = all_cpu_data[0];
+			max_prio = cpu->cppc_data->perf_caps.highest_perf;
+			for_each_cpu(cpu_index, &cppc_rd_cpu_mask) {
+				cpu = all_cpu_data[cpu_index];
+				perf_caps = &cpu->cppc_data->perf_caps;
+				if (max_prio != perf_caps->highest_perf) {
+					itmt_support = true;
+					break;
+				}
+			}
+
+			if (!itmt_support)
+				return;
+
+			for_each_cpu(cpu_index, &cppc_rd_cpu_mask) {
+				cpu = all_cpu_data[cpu_index];
+				perf_caps = &cpu->cppc_data->perf_caps;
+				sched_set_itmt_core_prio(
+					perf_caps->highest_perf, cpu_index);
+			}
+			schedule_work(&sched_itmt_work);
+		}
 		return;
+	}

 	if (!intel_pstate_get_ppc_enable_status())
 		return;

-	cpu = all_cpu_data[policy->cpu];
-
 	ret = acpi_processor_register_performance(&cpu->acpi_perf_data,
 						  policy->cpu);
 	if (ret)
@@ -444,6 +507,12 @@ static void intel_pstate_exit_perf_limits(struct cpufreq_policy *policy)
 	struct cpudata *cpu;

 	cpu = all_cpu_data[policy->cpu];
+
+	if (cpu->valid_cppc_table) {
+		cpumask_clear_cpu(policy->cpu, &cppc_rd_cpu_mask);
+		kfree(cpu->cppc_data);
+	}
+
 	if (!cpu->valid_pss_table)
 		return;