From patchwork Tue Mar 20 09:43:10 2018
X-Patchwork-Submitter: Dietmar Eggemann
X-Patchwork-Id: 10296743
From: Dietmar Eggemann
To: linux-kernel@vger.kernel.org, Peter Zijlstra, Quentin Perret,
    Thara Gopinath
Cc: linux-pm@vger.kernel.org, Morten Rasmussen, Chris Redpath,
    Patrick Bellasi, Valentin Schneider, "Rafael J. Wysocki",
    Greg Kroah-Hartman, Vincent Guittot, Viresh Kumar, Todd Kjos,
    Joel Fernandes
Subject: [RFC PATCH 4/6] sched/fair: Introduce an energy estimation helper function
Date: Tue, 20 Mar 2018 09:43:10 +0000
Message-Id: <20180320094312.24081-5-dietmar.eggemann@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20180320094312.24081-1-dietmar.eggemann@arm.com>
References: <20180320094312.24081-1-dietmar.eggemann@arm.com>

From: Quentin Perret

In preparation for the definition of an energy-aware wakeup path, a helper
function is provided to estimate the consequence on system energy when a
specific task wakes up on a specific CPU.

compute_energy() estimates the OPP to be reached by each frequency domain
and, from there, the energy consumed by each online CPU according to its
energy model and its percentage of busy time.

Cc: Ingo Molnar
Cc: Peter Zijlstra
Signed-off-by: Quentin Perret
Signed-off-by: Dietmar Eggemann
---
 kernel/sched/fair.c | 81 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 81 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6c72a5e7b1b0..76bd46502486 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6409,6 +6409,30 @@ static inline int cpu_overutilized(int cpu)
 }
 
 /*
+ * Returns the util of "cpu" if "p" wakes up on "dst_cpu".
+ */
+static unsigned long cpu_util_next(int cpu, struct task_struct *p, int dst_cpu)
+{
+	unsigned long util = cpu_rq(cpu)->cfs.avg.util_avg;
+	unsigned long capacity = capacity_orig_of(cpu);
+
+	/*
+	 * If p is where it should be, or if it has no impact on cpu, there is
+	 * not much to do.
+	 */
+	if ((task_cpu(p) == dst_cpu) || (cpu != task_cpu(p) && cpu != dst_cpu))
+		goto clamp_util;
+
+	if (dst_cpu == cpu)
+		util += task_util(p);
+	else
+		util = max_t(long, util - task_util(p), 0);
+
+clamp_util:
+	return (util >= capacity) ? capacity : util;
+}
+
+/*
  * Disable WAKE_AFFINE in the case where task @p doesn't fit in the
  * capacity of either the waking CPU @cpu or the previous CPU @prev_cpu.
  *
@@ -6432,6 +6456,63 @@ static int wake_cap(struct task_struct *p, int cpu, int prev_cpu)
 	return !util_fits_capacity(task_util(p), min_cap);
 }
 
+static struct capacity_state *find_cap_state(int cpu, unsigned long util)
+{
+	struct sched_energy_model *em = *per_cpu_ptr(energy_model, cpu);
+	struct capacity_state *cs = NULL;
+	int i;
+
+	/*
+	 * As the goal is to estimate the OPP reached for a specific util
+	 * value, mimic the behaviour of schedutil with a 1.25 coefficient.
+	 */
+	util += util >> 2;
+
+	for (i = 0; i < em->nr_cap_states; i++) {
+		cs = &em->cap_states[i];
+		if (cs->cap >= util)
+			break;
+	}
+
+	return cs;
+}
+
+static unsigned long compute_energy(struct task_struct *p, int dst_cpu)
+{
+	unsigned long util, fdom_max_util;
+	struct capacity_state *cs;
+	unsigned long energy = 0;
+	struct freq_domain *fdom;
+	int cpu;
+
+	for_each_freq_domain(fdom) {
+		fdom_max_util = 0;
+		for_each_cpu_and(cpu, &(fdom->span), cpu_online_mask) {
+			util = cpu_util_next(cpu, p, dst_cpu);
+			fdom_max_util = max(util, fdom_max_util);
+		}
+
+		/*
+		 * Here we assume that the capacity states of CPUs belonging to
+		 * the same frequency domain are shared. Hence, we look at the
+		 * capacity state of the first CPU and re-use it for all.
+		 */
+		cpu = cpumask_first(&(fdom->span));
+		cs = find_cap_state(cpu, fdom_max_util);
+
+		/*
+		 * The energy consumed by each CPU is derived from the power
+		 * it dissipates at the expected OPP and its percentage of
+		 * busy time.
+		 */
+		for_each_cpu_and(cpu, &(fdom->span), cpu_online_mask) {
+			util = cpu_util_next(cpu, p, dst_cpu);
+			energy += cs->power * util / cs->cap;
+		}
+	}
+	return energy;
+}
+
 /*
  * select_task_rq_fair: Select target runqueue for the waking task in domains
  * that have the 'sd_flag' flag set. In practice, this is SD_BALANCE_WAKE,
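
As a side note for readers less familiar with the energy model: per frequency
domain, the estimation picks the OPP matching the highest cpu_util_next()
value in the domain (with the same 1.25 margin the patch borrows from
schedutil), and then charges every CPU of the domain the power of that OPP
scaled by its percentage of busy time (util / cap). The standalone userspace
sketch below only illustrates that arithmetic with made-up numbers (one
frequency domain, two CPUs, three OPPs); the table and values are invented
for illustration and the code is not part of the patch.

/*
 * Userspace sketch of the find_cap_state()/compute_energy() math above.
 * The capacity-state table and utilization values are invented; only the
 * arithmetic mirrors the patch.
 */
#include <stdio.h>

struct capacity_state {
	unsigned long cap;	/* compute capacity at this OPP */
	unsigned long power;	/* power at this OPP */
};

/* One hypothetical frequency domain with three OPPs and two CPUs. */
static struct capacity_state cap_states[] = {
	{  512, 100 },
	{  768, 200 },
	{ 1024, 400 },
};
static unsigned long cpu_util[] = { 300, 500 };	/* predicted util per CPU */

static struct capacity_state *find_cap_state(unsigned long util)
{
	struct capacity_state *cs = &cap_states[0];
	unsigned int i;

	util += util >> 2;	/* schedutil-like 1.25 margin */

	for (i = 0; i < sizeof(cap_states) / sizeof(cap_states[0]); i++) {
		cs = &cap_states[i];
		if (cs->cap >= util)
			break;
	}
	return cs;
}

int main(void)
{
	unsigned long fdom_max_util = 0, energy = 0;
	struct capacity_state *cs;
	unsigned int cpu;

	/* The OPP is chosen from the busiest CPU of the domain... */
	for (cpu = 0; cpu < 2; cpu++)
		if (cpu_util[cpu] > fdom_max_util)
			fdom_max_util = cpu_util[cpu];

	cs = find_cap_state(fdom_max_util);	/* 500 * 1.25 = 625 -> cap 768 */

	/* ...and every CPU is charged that OPP's power scaled by util/cap. */
	for (cpu = 0; cpu < 2; cpu++)
		energy += cs->power * cpu_util[cpu] / cs->cap;

	/* 200 * 300 / 768 + 200 * 500 / 768 = 78 + 130 = 208 */
	printf("estimated energy: %lu\n", energy);
	return 0;
}

The point the sketch makes is that the busiest CPU of a domain selects the
OPP, and hence the power, billed to all of its siblings, which is exactly
what the fdom_max_util pass in compute_energy() captures.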