From patchwork Tue Jul  7 18:24:24 2015
X-Patchwork-Submitter: Morten Rasmussen
X-Patchwork-Id: 6738221
From: Morten Rasmussen
To: peterz@infradead.org, mingo@redhat.com
Cc: vincent.guittot@linaro.org, daniel.lezcano@linaro.org,
    Dietmar Eggemann, yuyang.du@intel.com, mturquette@baylibre.com,
    rjw@rjwysocki.net, Juri Lelli, sgurrappadi@nvidia.com,
    pang.xunlei@zte.com.cn, linux-kernel@vger.kernel.org,
    linux-pm@vger.kernel.org, Juri Lelli
Subject: [RFCv5 PATCH 41/46] sched/fair: add triggers for OPP change requests
Date: Tue, 7 Jul 2015 19:24:24 +0100
Message-Id: <1436293469-25707-42-git-send-email-morten.rasmussen@arm.com>
In-Reply-To: <1436293469-25707-1-git-send-email-morten.rasmussen@arm.com>
References: <1436293469-25707-1-git-send-email-morten.rasmussen@arm.com>

From: Juri Lelli

Each time a task is {en,de}queued we might need to adapt the current
frequency to the new usage. Add triggers on {en,de}queue_task_fair() for
this purpose. Only trigger a freq request if we are effectively waking up
or going to sleep. Filter out load balancing related calls to reduce the
number of triggers.
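
For illustration only, a minimal, self-contained sketch of the fixed-point
arithmetic behind the capacity requests added below. The helper names
(example_overutilized(), example_request()) and the sample numbers are made
up for this example and are not part of the patch; the same capacity_margin
constant (1280 on a 1024 scale) puts the over-utilization tipping point at
~80% of capacity and scales the OPP request ~25% above the current usage:

	/* Illustrative userspace sketch, not kernel code. */
	#include <stdbool.h>
	#include <stdio.h>

	#define SCHED_CAPACITY_SHIFT	10
	#define SCHED_CAPACITY_SCALE	(1 << SCHED_CAPACITY_SHIFT)	/* 1024 */

	static unsigned int capacity_margin = 1280;	/* ~20% margin */

	/* Tipping point: usage beyond ~80% of capacity counts as over-utilized. */
	static bool example_overutilized(unsigned long capacity, unsigned long usage)
	{
		return capacity * SCHED_CAPACITY_SCALE < usage * capacity_margin;
	}

	/* Frequency request: scale current usage by the same margin, so the
	 * requested capacity leaves head room above the present usage. */
	static unsigned long example_request(unsigned long usage)
	{
		return (usage * capacity_margin) >> SCHED_CAPACITY_SHIFT;
	}

	int main(void)
	{
		/* usage 600 of 1024: not over-utilized (600 * 1280 < 1024 * 1024),
		 * and the request becomes 750, i.e. usage sits at 80% of it. */
		printf("over: %d, request: %lu\n",
		       example_overutilized(1024, 600), example_request(600));
		return 0;
	}
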
cc: Ingo Molnar
cc: Peter Zijlstra
Signed-off-by: Juri Lelli
---
 kernel/sched/fair.c | 42 ++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 40 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f74e9d2..b8627c6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4281,7 +4281,10 @@ static inline void hrtick_update(struct rq *rq)
 }
 #endif
 
+static unsigned int capacity_margin = 1280; /* ~20% margin */
+
 static bool cpu_overutilized(int cpu);
+static unsigned long get_cpu_usage(int cpu);
 
 struct static_key __sched_energy_freq __read_mostly = STATIC_KEY_INIT_FALSE;
 
 /*
@@ -4332,6 +4335,26 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		if (!task_new && !rq->rd->overutilized &&
 		    cpu_overutilized(rq->cpu))
 			rq->rd->overutilized = true;
+		/*
+		 * We want to trigger a freq switch request only for tasks that
+		 * are waking up; this is because we get here also during
+		 * load balancing, but in these cases it seems wise to trigger
+		 * as single request after load balancing is done.
+		 *
+		 * XXX: how about fork()? Do we need a special flag/something
+		 * to tell if we are here after a fork() (wakeup_task_new)?
+		 *
+		 * Also, we add a margin (same ~20% used for the tipping point)
+		 * to our request to provide some head room if p's utilization
+		 * further increases.
+		 */
+		if (sched_energy_freq() && !task_new) {
+			unsigned long req_cap = get_cpu_usage(cpu_of(rq));
+
+			req_cap = req_cap * capacity_margin
+						>> SCHED_CAPACITY_SHIFT;
+			cpufreq_sched_set_cap(cpu_of(rq), req_cap);
+		}
 	}
 	hrtick_update(rq);
 }
@@ -4393,6 +4416,23 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	if (!se) {
 		sub_nr_running(rq, 1);
 		update_rq_runnable_avg(rq, 1);
+		/*
+		 * We want to trigger a freq switch request only for tasks that
+		 * are going to sleep; this is because we get here also during
+		 * load balancing, but in these cases it seems wise to trigger
+		 * as single request after load balancing is done.
+		 *
+		 * Also, we add a margin (same ~20% used for the tipping point)
+		 * to our request to provide some head room if p's utilization
+		 * further increases.
+		 */
+		if (sched_energy_freq() && task_sleep) {
+			unsigned long req_cap = get_cpu_usage(cpu_of(rq));
+
+			req_cap = req_cap * capacity_margin
+						>> SCHED_CAPACITY_SHIFT;
+			cpufreq_sched_set_cap(cpu_of(rq), req_cap);
+		}
 	}
 	hrtick_update(rq);
 }
@@ -4959,8 +4999,6 @@ static int find_new_capacity(struct energy_env *eenv,
 	return idx;
 }
 
-static unsigned int capacity_margin = 1280; /* ~20% margin */
-
 static bool cpu_overutilized(int cpu)
 {
 	return (capacity_of(cpu) * 1024) <