From patchwork Mon May 9 21:20:11 2016
X-Patchwork-Submitter: Steve Muckle
X-Patchwork-Id: 9050031
From: Steve Muckle
To: Peter Zijlstra, Ingo Molnar, "Rafael J. Wysocki"
Wysocki" Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, Vincent Guittot , Morten Rasmussen , Dietmar Eggemann , Juri Lelli , Patrick Bellasi , Michael Turquette , Viresh Kumar , Srinivas Pandruvada , Len Brown Subject: [PATCH 2/5] cpufreq: schedutil: support scheduler cpufreq callbacks on remote CPUs Date: Mon, 9 May 2016 14:20:11 -0700 Message-Id: <1462828814-32530-3-git-send-email-smuckle@linaro.org> X-Mailer: git-send-email 2.4.10 In-Reply-To: <1462828814-32530-1-git-send-email-smuckle@linaro.org> References: <1462828814-32530-1-git-send-email-smuckle@linaro.org> Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org X-Spam-Status: No, score=-8.9 required=5.0 tests=BAYES_00,DKIM_SIGNED, RCVD_IN_DNSWL_HI,RP_MATCHES_RCVD,T_DKIM_INVALID,UNPARSEABLE_RELAY autolearn=unavailable version=3.3.1 X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on mail.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP In preparation for the scheduler cpufreq callback happening on remote CPUs, add support for this in schedutil. Schedutil currently requires the callback occur on the CPU being updated in order to support fast frequency switches. Remove this limitation by checking for the current CPU being outside the target CPU's cpufreq policy and if this is the case, enqueuing an irq_work on the target CPU. The irq_work for schedutil is modified to carry out a fast frequency switch if that is enabled for the policy. If the callback occurs on a CPU within the target CPU's policy, the transition is carried out on the local CPU. Signed-off-by: Steve Muckle --- kernel/sched/cpufreq_schedutil.c | 86 ++++++++++++++++++++++++++++++---------- 1 file changed, 65 insertions(+), 21 deletions(-) diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c index 154ae3a51e86..c81f9432f520 100644 --- a/kernel/sched/cpufreq_schedutil.c +++ b/kernel/sched/cpufreq_schedutil.c @@ -76,27 +76,61 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time) return delta_ns >= sg_policy->freq_update_delay_ns; } -static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time, +static void sugov_fast_switch(struct sugov_policy *sg_policy, int cpu, + unsigned int next_freq) +{ + struct cpufreq_policy *policy = sg_policy->policy; + + next_freq = cpufreq_driver_fast_switch(policy, next_freq); + if (next_freq == CPUFREQ_ENTRY_INVALID) + return; + + policy->cur = next_freq; + trace_cpu_frequency(next_freq, cpu); +} + +#ifdef CONFIG_SMP +static inline bool sugov_queue_remote_callback(struct sugov_policy *sg_policy, + int cpu) +{ + struct cpufreq_policy *policy = sg_policy->policy; + + if (!cpumask_test_cpu(smp_processor_id(), policy->cpus)) { + sg_policy->work_in_progress = true; + irq_work_queue_on(&sg_policy->irq_work, cpu); + return true; + } + + return false; +} +#else +static inline bool sugov_queue_remote_callback(struct sugov_policy *sg_policy, + int cpu) +{ + return false; +} +#endif + +static void sugov_update_commit(struct sugov_cpu *sg_cpu, int cpu, u64 time, unsigned int next_freq) { + struct sugov_policy *sg_policy = sg_cpu->sg_policy; struct cpufreq_policy *policy = sg_policy->policy; sg_policy->last_freq_update_time = time; + if (sg_policy->next_freq == next_freq) { + trace_cpu_frequency(policy->cur, cpu); + return; + } + sg_policy->next_freq = next_freq; + + if (sugov_queue_remote_callback(sg_policy, cpu)) + return; + if (policy->fast_switch_enabled) { - if (sg_policy->next_freq == next_freq) { - 
-			trace_cpu_frequency(policy->cur, smp_processor_id());
-			return;
-		}
-		sg_policy->next_freq = next_freq;
-		next_freq = cpufreq_driver_fast_switch(policy, next_freq);
-		if (next_freq == CPUFREQ_ENTRY_INVALID)
-			return;
-
-		policy->cur = next_freq;
-		trace_cpu_frequency(next_freq, smp_processor_id());
-	} else if (sg_policy->next_freq != next_freq) {
-		sg_policy->next_freq = next_freq;
+		sugov_fast_switch(sg_policy, cpu, next_freq);
+	} else {
 		sg_policy->work_in_progress = true;
 		irq_work_queue(&sg_policy->irq_work);
 	}
@@ -142,12 +176,13 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
 
 	next_f = util == ULONG_MAX ? policy->cpuinfo.max_freq :
 			get_next_freq(policy, util, max);
-	sugov_update_commit(sg_policy, time, next_f);
+	sugov_update_commit(sg_cpu, hook->cpu, time, next_f);
 }
 
-static unsigned int sugov_next_freq_shared(struct sugov_policy *sg_policy,
+static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu,
 					   unsigned long util, unsigned long max)
 {
+	struct sugov_policy *sg_policy = sg_cpu->sg_policy;
 	struct cpufreq_policy *policy = sg_policy->policy;
 	unsigned int max_f = policy->cpuinfo.max_freq;
 	u64 last_freq_update_time = sg_policy->last_freq_update_time;
@@ -161,10 +196,10 @@ static unsigned int sugov_next_freq_shared(struct sugov_policy *sg_policy,
 		unsigned long j_util, j_max;
 		s64 delta_ns;
 
-		if (j == smp_processor_id())
+		j_sg_cpu = &per_cpu(sugov_cpu, j);
+		if (j_sg_cpu == sg_cpu)
 			continue;
 
-		j_sg_cpu = &per_cpu(sugov_cpu, j);
 		/*
 		 * If the CPU utilization was last updated before the previous
 		 * frequency update and the time elapsed between the last update
@@ -204,8 +239,8 @@ static void sugov_update_shared(struct update_util_data *hook, u64 time,
 	sg_cpu->last_update = time;
 
 	if (sugov_should_update_freq(sg_policy, time)) {
-		next_f = sugov_next_freq_shared(sg_policy, util, max);
-		sugov_update_commit(sg_policy, time, next_f);
+		next_f = sugov_next_freq_shared(sg_cpu, util, max);
+		sugov_update_commit(sg_cpu, hook->cpu, time, next_f);
 	}
 
 	raw_spin_unlock(&sg_policy->update_lock);
@@ -226,9 +261,18 @@ static void sugov_work(struct work_struct *work)
 static void sugov_irq_work(struct irq_work *irq_work)
 {
 	struct sugov_policy *sg_policy;
+	struct cpufreq_policy *policy;
 
 	sg_policy = container_of(irq_work, struct sugov_policy, irq_work);
-	schedule_work_on(smp_processor_id(), &sg_policy->work);
+	policy = sg_policy->policy;
+
+	if (policy->fast_switch_enabled) {
+		sugov_fast_switch(sg_policy, smp_processor_id(),
+				  sg_policy->next_freq);
+		sg_policy->work_in_progress = false;
+	} else {
+		schedule_work_on(smp_processor_id(), &sg_policy->work);
+	}
 }
 
 /**************************  sysfs interface  ************************/
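
For readers unfamiliar with the mechanism the patch leans on: irq_work_queue_on()
is the generic kernel facility that lets one CPU ask another CPU to run a callback
in hard-irq context, which is what allows the frequency transition above to be
performed on a CPU that belongs to the target cpufreq policy. Below is a minimal
sketch (not part of the patch) of that pattern as a toy kernel module; the module
and the remote_work_fn() callback are hypothetical names, and it assumes an SMP
kernel with CONFIG_IRQ_WORK enabled.

/*
 * Toy sketch of the irq_work_queue_on() pattern used above. All names here
 * (remote_work, remote_work_fn) are hypothetical; this is not governor code.
 */
#include <linux/module.h>
#include <linux/init.h>
#include <linux/irq_work.h>
#include <linux/smp.h>
#include <linux/cpumask.h>

static struct irq_work remote_work;

/* Runs in hard-irq context on whichever CPU the work was queued on. */
static void remote_work_fn(struct irq_work *work)
{
	pr_info("irq_work callback ran on CPU%d\n", smp_processor_id());
}

static int __init remote_work_init(void)
{
	int target = cpumask_first(cpu_online_mask);

	init_irq_work(&remote_work, remote_work_fn);

	/*
	 * Queue the callback on a specific CPU, analogous to how
	 * sugov_queue_remote_callback() forwards the update to a CPU
	 * within the target cpufreq policy.
	 */
	irq_work_queue_on(&remote_work, target);
	return 0;
}

static void __exit remote_work_exit(void)
{
	/* Make sure the callback is no longer pending before unload. */
	irq_work_sync(&remote_work);
}

module_init(remote_work_init);
module_exit(remote_work_exit);
MODULE_LICENSE("GPL");

In schedutil itself the queued callback is sugov_irq_work(), which after this
patch either fast-switches directly or defers to process context via
schedule_work_on().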