From patchwork Tue Jan 15 10:14:58 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Patrick Bellasi
X-Patchwork-Id: 10764231
From: Patrick Bellasi
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, linux-api@vger.kernel.org
Cc: Ingo Molnar, Peter Zijlstra, Tejun Heo, Rafael J. Wysocki, Vincent Guittot, Viresh Kumar, Paul Turner, Quentin Perret, Dietmar Eggemann, Morten Rasmussen, Juri Lelli, Todd Kjos, Joel Fernandes, Steve Muckle, Suren Baghdasaryan
Subject: [PATCH v6 01/16] sched/core: Allow sched_setattr() to use the current policy
Date: Tue, 15 Jan 2019 10:14:58 +0000
Message-Id: <20190115101513.2822-2-patrick.bellasi@arm.com>
X-Mailer: git-send-email 2.19.2
In-Reply-To: <20190115101513.2822-1-patrick.bellasi@arm.com>
References: <20190115101513.2822-1-patrick.bellasi@arm.com>
X-Mailing-List: linux-pm@vger.kernel.org

The sched_setattr() syscall mandates that a policy is always specified. This means user-space must always know which policy a task will have when its attributes are configured, and it makes it impossible to add more generic task attributes that are valid across different scheduling policies. Reading the policy before setting generic task attributes is racy, since we cannot be sure it is not changed concurrently.

Introduce the support required to change generic task attributes without affecting the current task policy. This is done by adding an attribute flag (SCHED_FLAG_KEEP_POLICY) which enforces the use of the current policy: it extends the sched_setattr() non-POSIX syscall with the SETPARAM_POLICY value already used by the sched_setparam() POSIX syscall.
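To illustrate the intended user-space usage, here is a minimal, hypothetical sketch (not part of the patch): it assumes this patch is applied, defines a local copy of the uapi struct sched_attr, and invokes the raw syscall directly since glibc provides no sched_setattr() wrapper; the helper name is ours.

  #define _GNU_SOURCE
  #include <unistd.h>
  #include <sys/syscall.h>
  #include <linux/types.h>

  /* Local copy of the uapi layout (SCHED_ATTR_SIZE_VER0 == 48 bytes). */
  struct sched_attr {
          __u32 size;
          __u32 sched_policy;
          __u64 sched_flags;
          __s32 sched_nice;
          __u32 sched_priority;
          __u64 sched_runtime;
          __u64 sched_deadline;
          __u64 sched_period;
  };

  #define SCHED_FLAG_KEEP_POLICY  0x08

  /* Update only the nice value, whatever policy the task currently has. */
  static int set_nice_keep_policy(pid_t pid, int nice)
  {
          struct sched_attr attr = {
                  .size        = sizeof(attr),
                  .sched_flags = SCHED_FLAG_KEEP_POLICY,
                  .sched_nice  = nice,
          };

          return syscall(__NR_sched_setattr, pid, &attr, 0);
  }

With SCHED_FLAG_KEEP_POLICY set, attr.sched_policy is ignored and the task keeps whatever policy it already has; the intent is that only the supplied attributes (here the nice value) are applied.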
Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra --- Changes in v6: Message-ID: <20181107135801.GE9761@hirez.programming.kicks-ass.net> - rename SCHED_FLAG_TUNE_POLICY in SCHED_FLAG_KEEP_POLICY - moved at the beginning of the series --- include/uapi/linux/sched.h | 6 +++++- kernel/sched/core.c | 11 ++++++++++- 2 files changed, 15 insertions(+), 2 deletions(-) diff --git a/include/uapi/linux/sched.h b/include/uapi/linux/sched.h index 22627f80063e..43832a87016a 100644 --- a/include/uapi/linux/sched.h +++ b/include/uapi/linux/sched.h @@ -40,6 +40,8 @@ /* SCHED_ISO: reserved but not implemented yet */ #define SCHED_IDLE 5 #define SCHED_DEADLINE 6 +/* Must be the last entry: used to sanity check attr.policy values */ +#define SCHED_POLICY_MAX 7 /* Can be ORed in to make sure the process is reverted back to SCHED_NORMAL on fork */ #define SCHED_RESET_ON_FORK 0x40000000 @@ -50,9 +52,11 @@ #define SCHED_FLAG_RESET_ON_FORK 0x01 #define SCHED_FLAG_RECLAIM 0x02 #define SCHED_FLAG_DL_OVERRUN 0x04 +#define SCHED_FLAG_KEEP_POLICY 0x08 #define SCHED_FLAG_ALL (SCHED_FLAG_RESET_ON_FORK | \ SCHED_FLAG_RECLAIM | \ - SCHED_FLAG_DL_OVERRUN) + SCHED_FLAG_DL_OVERRUN | \ + SCHED_FLAG_KEEP_POLICY) #endif /* _UAPI_LINUX_SCHED_H */ diff --git a/kernel/sched/core.c b/kernel/sched/core.c index a674c7db2f29..a68763a4ccae 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -4560,8 +4560,17 @@ SYSCALL_DEFINE3(sched_setattr, pid_t, pid, struct sched_attr __user *, uattr, if (retval) return retval; - if ((int)attr.sched_policy < 0) + /* + * A valid policy is always required from userspace, unless + * SCHED_FLAG_KEEP_POLICY is set and the current policy + * is enforced for this call. + */ + if (attr.sched_policy >= SCHED_POLICY_MAX && + !(attr.sched_flags & SCHED_FLAG_KEEP_POLICY)) { return -EINVAL; + } + if (attr.sched_flags & SCHED_FLAG_KEEP_POLICY) + attr.sched_policy = SETPARAM_POLICY; rcu_read_lock(); retval = -ESRCH; From patchwork Tue Jan 15 10:14:59 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Patrick Bellasi X-Patchwork-Id: 10764229 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 717A814E5 for ; Tue, 15 Jan 2019 10:17:15 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 5FBD92B3AB for ; Tue, 15 Jan 2019 10:17:15 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 534E62B89A; Tue, 15 Jan 2019 10:17:15 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 65FAE2B3AB for ; Tue, 15 Jan 2019 10:17:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728710AbfAOKPf (ORCPT ); Tue, 15 Jan 2019 05:15:35 -0500 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:46808 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727238AbfAOKPe (ORCPT ); Tue, 15 Jan 2019 05:15:34 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 6659815AD; 
Tue, 15 Jan 2019 02:15:33 -0800 (PST)
From: Patrick Bellasi
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, linux-api@vger.kernel.org
Cc: Ingo Molnar, Peter Zijlstra, Tejun Heo, Rafael J. Wysocki, Vincent Guittot, Viresh Kumar, Paul Turner, Quentin Perret, Dietmar Eggemann, Morten Rasmussen, Juri Lelli, Todd Kjos, Joel Fernandes, Steve Muckle, Suren Baghdasaryan
Subject: [PATCH v6 02/16] sched/core: uclamp: Extend sched_setattr() to support utilization clamping
Date: Tue, 15 Jan 2019 10:14:59 +0000
Message-Id: <20190115101513.2822-3-patrick.bellasi@arm.com>
X-Mailer: git-send-email 2.19.2
In-Reply-To: <20190115101513.2822-1-patrick.bellasi@arm.com>
References: <20190115101513.2822-1-patrick.bellasi@arm.com>
X-Mailing-List: linux-pm@vger.kernel.org

The SCHED_DEADLINE scheduling class provides an advanced and formal model to describe task requirements, which can be translated into proper decisions for both task placement and frequency selection. Other classes have a simpler model based on the POSIX concept of priorities. Such a priority-based model, however, does not allow exploiting some of the most advanced features of the Linux scheduler, such as driving frequency selection via the schedutil cpufreq governor. Yet, also for non SCHED_DEADLINE tasks, it is still interesting to define task properties that can support scheduler decisions.

Utilization clamping exposes to user-space a new set of per-task attributes the scheduler can use as hints about the expected/required utilization of a task. This allows implementing a "proactive" per-task frequency control policy, more advanced than the current one based only on the "passive" measured task utilization. For example, it is possible to boost interactive tasks (e.g. to get better performance) or to cap background tasks (e.g. to be more energy/thermal efficient).

Introduce a new API to set utilization clamping values for a specified task by extending sched_setattr(), a syscall which already allows defining task-specific properties for different scheduling classes. A new pair of attributes allows specifying a minimum and maximum utilization the scheduler can consider for a task.
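As an illustration of the new attributes, here is another hypothetical user-space sketch (not part of the patch): it assumes the struct sched_attr from the previous sketch extended with the two new __u32 fields (sched_util_min, sched_util_max) introduced below, the flag values defined by this series, and a raw syscall; the helper name is ours.

  #define SCHED_FLAG_KEEP_POLICY  0x08
  #define SCHED_FLAG_UTIL_CLAMP   0x10

  /* Hypothetical sketch: request a utilization boost to at least util_min
   * and a cap at util_max (out of SCHED_CAPACITY_SCALE == 1024) for the
   * calling task, keeping its current scheduling policy. */
  static int set_util_clamps(unsigned int util_min, unsigned int util_max)
  {
          struct sched_attr attr = {
                  .size           = sizeof(attr),  /* >= SCHED_ATTR_SIZE_VER1 (56) */
                  .sched_flags    = SCHED_FLAG_KEEP_POLICY | SCHED_FLAG_UTIL_CLAMP,
                  .sched_util_min = util_min,
                  .sched_util_max = util_max,
          };

          /* pid == 0: apply to the calling task */
          return syscall(__NR_sched_setattr, 0, &attr, 0);
  }

For example, set_util_clamps(256, 768) would ask to run at no less than ~25% and no more than ~75% of the capacity scale. Note that passing an attr.size smaller than SCHED_ATTR_SIZE_VER1 together with SCHED_FLAG_UTIL_CLAMP is rejected with -EINVAL by the size check added to sched_copy_attr() in this patch.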
Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra --- Changes in v6: Message-ID: <20181107120942.GM9781@hirez.programming.kicks-ass.net> - add size check in sched_copy_attr() Others: - typos and changelog cleanups --- include/linux/sched.h | 16 ++++++++ include/uapi/linux/sched.h | 4 +- include/uapi/linux/sched/types.h | 65 +++++++++++++++++++++++++++----- init/Kconfig | 21 +++++++++++ init/init_task.c | 5 +++ kernel/sched/core.c | 43 +++++++++++++++++++++ 6 files changed, 144 insertions(+), 10 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index 224666226e87..65199309b866 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -280,6 +280,18 @@ struct vtime { u64 gtime; }; +/** + * enum uclamp_id - Utilization clamp constraints + * @UCLAMP_MIN: Minimum utilization + * @UCLAMP_MAX: Maximum utilization + * @UCLAMP_CNT: Utilization clamp constraints count + */ +enum uclamp_id { + UCLAMP_MIN = 0, + UCLAMP_MAX, + UCLAMP_CNT +}; + struct sched_info { #ifdef CONFIG_SCHED_INFO /* Cumulative counters: */ @@ -648,6 +660,10 @@ struct task_struct { #endif struct sched_dl_entity dl; +#ifdef CONFIG_UCLAMP_TASK + int uclamp[UCLAMP_CNT]; +#endif + #ifdef CONFIG_PREEMPT_NOTIFIERS /* List of struct preempt_notifier: */ struct hlist_head preempt_notifiers; diff --git a/include/uapi/linux/sched.h b/include/uapi/linux/sched.h index 43832a87016a..9ef6dad0f854 100644 --- a/include/uapi/linux/sched.h +++ b/include/uapi/linux/sched.h @@ -53,10 +53,12 @@ #define SCHED_FLAG_RECLAIM 0x02 #define SCHED_FLAG_DL_OVERRUN 0x04 #define SCHED_FLAG_KEEP_POLICY 0x08 +#define SCHED_FLAG_UTIL_CLAMP 0x10 #define SCHED_FLAG_ALL (SCHED_FLAG_RESET_ON_FORK | \ SCHED_FLAG_RECLAIM | \ SCHED_FLAG_DL_OVERRUN | \ - SCHED_FLAG_KEEP_POLICY) + SCHED_FLAG_KEEP_POLICY | \ + SCHED_FLAG_UTIL_CLAMP) #endif /* _UAPI_LINUX_SCHED_H */ diff --git a/include/uapi/linux/sched/types.h b/include/uapi/linux/sched/types.h index 10fbb8031930..01439e07507c 100644 --- a/include/uapi/linux/sched/types.h +++ b/include/uapi/linux/sched/types.h @@ -9,6 +9,7 @@ struct sched_param { }; #define SCHED_ATTR_SIZE_VER0 48 /* sizeof first published struct */ +#define SCHED_ATTR_SIZE_VER1 56 /* add: util_{min,max} */ /* * Extended scheduling parameters data structure. @@ -21,8 +22,33 @@ struct sched_param { * the tasks may be useful for a wide variety of application fields, e.g., * multimedia, streaming, automation and control, and many others. * - * This variant (sched_attr) is meant at describing a so-called - * sporadic time-constrained task. In such model a task is specified by: + * This variant (sched_attr) allows to define additional attributes to + * improve the scheduler knowledge about task requirements. + * + * Scheduling Class Attributes + * =========================== + * + * A subset of sched_attr attributes specifies the + * scheduling policy and relative POSIX attributes: + * + * @size size of the structure, for fwd/bwd compat. + * + * @sched_policy task's scheduling policy + * @sched_nice task's nice value (SCHED_NORMAL/BATCH) + * @sched_priority task's static priority (SCHED_FIFO/RR) + * + * Certain more advanced scheduling features can be controlled by a + * predefined set of flags via the attribute: + * + * @sched_flags for customizing the scheduler behaviour + * + * Sporadic Time-Constrained Tasks Attributes + * ========================================== + * + * A subset of sched_attr attributes allows to describe a so-called + * sporadic time-constrained task. 
+ * + * In such model a task is specified by: * - the activation period or minimum instance inter-arrival time; * - the maximum (or average, depending on the actual scheduling * discipline) computation time of all instances, a.k.a. runtime; @@ -34,14 +60,8 @@ struct sched_param { * than the runtime and must be completed by time instant t equal to * the instance activation time + the deadline. * - * This is reflected by the actual fields of the sched_attr structure: + * This is reflected by the following fields of the sched_attr structure: * - * @size size of the structure, for fwd/bwd compat. - * - * @sched_policy task's scheduling policy - * @sched_flags for customizing the scheduler behaviour - * @sched_nice task's nice value (SCHED_NORMAL/BATCH) - * @sched_priority task's static priority (SCHED_FIFO/RR) * @sched_deadline representative of the task's deadline * @sched_runtime representative of the task's runtime * @sched_period representative of the task's period @@ -53,6 +73,28 @@ struct sched_param { * As of now, the SCHED_DEADLINE policy (sched_dl scheduling class) is the * only user of this new interface. More information about the algorithm * available in the scheduling class file or in Documentation/. + * + * Task Utilization Attributes + * =========================== + * + * A subset of sched_attr attributes allows to specify the utilization + * expected for a task. These attributes allow to inform the scheduler about + * the utilization boundaries within which it should schedule tasks. These + * boundaries are valuable hints to support scheduler decisions on both task + * placement and frequency selection. + * + * @sched_util_min represents the minimum utilization + * @sched_util_max represents the maximum utilization + * + * Utilization is a value in the range [0..SCHED_CAPACITY_SCALE]. It + * represents the percentage of CPU time used by a task when running at the + * maximum frequency on the highest capacity CPU of the system. For example, a + * 20% utilization task is a task running for 2ms every 10ms. + * + * A task with a min utilization value bigger than 0 is more likely scheduled + * on a CPU with a capacity big enough to fit the specified value. + * A task with a max utilization value smaller than 1024 is more likely + * scheduled on a CPU with no more capacity than the specified value. */ struct sched_attr { __u32 size; @@ -70,6 +112,11 @@ struct sched_attr { __u64 sched_runtime; __u64 sched_deadline; __u64 sched_period; + + /* Utilization hints */ + __u32 sched_util_min; + __u32 sched_util_max; + }; #endif /* _UAPI_LINUX_SCHED_TYPES_H */ diff --git a/init/Kconfig b/init/Kconfig index d47cb77a220e..ea7c928a177b 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -640,6 +640,27 @@ config HAVE_UNSTABLE_SCHED_CLOCK config GENERIC_SCHED_CLOCK bool +menu "Scheduler features" + +config UCLAMP_TASK + bool "Enable utilization clamping for RT/FAIR tasks" + depends on CPU_FREQ_GOV_SCHEDUTIL + help + This feature enables the scheduler to track the clamped utilization + of each CPU based on RUNNABLE tasks scheduled on that CPU. + + With this option, the user can specify the min and max CPU + utilization allowed for RUNNABLE tasks. The max utilization defines + the maximum frequency a task should use while the min utilization + defines the minimum frequency it should use. + + Both min and max utilization clamp values are hints to the scheduler, + aiming at improving its frequency selection policy, but they do not + enforce or grant any specific bandwidth for tasks. 
+ + If in doubt, say N. + +endmenu # # For architectures that want to enable the support for NUMA-affine scheduler # balancing logic: diff --git a/init/init_task.c b/init/init_task.c index 5aebe3be4d7c..5bfdcc3fb839 100644 --- a/init/init_task.c +++ b/init/init_task.c @@ -6,6 +6,7 @@ #include #include #include +#include #include #include #include @@ -91,6 +92,10 @@ struct task_struct init_task #endif #ifdef CONFIG_CGROUP_SCHED .sched_task_group = &root_task_group, +#endif +#ifdef CONFIG_UCLAMP_TASK + .uclamp[UCLAMP_MIN] = 0, + .uclamp[UCLAMP_MAX] = SCHED_CAPACITY_SCALE, #endif .ptraced = LIST_HEAD_INIT(init_task.ptraced), .ptrace_entry = LIST_HEAD_INIT(init_task.ptrace_entry), diff --git a/kernel/sched/core.c b/kernel/sched/core.c index a68763a4ccae..66ff83e115db 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -717,6 +717,28 @@ static void set_load_weight(struct task_struct *p, bool update_load) } } +#ifdef CONFIG_UCLAMP_TASK +static int __setscheduler_uclamp(struct task_struct *p, + const struct sched_attr *attr) +{ + if (attr->sched_util_min > attr->sched_util_max) + return -EINVAL; + if (attr->sched_util_max > SCHED_CAPACITY_SCALE) + return -EINVAL; + + p->uclamp[UCLAMP_MIN] = attr->sched_util_min; + p->uclamp[UCLAMP_MAX] = attr->sched_util_max; + + return 0; +} +#else /* CONFIG_UCLAMP_TASK */ +static inline int __setscheduler_uclamp(struct task_struct *p, + const struct sched_attr *attr) +{ + return -EINVAL; +} +#endif /* CONFIG_UCLAMP_TASK */ + static inline void enqueue_task(struct rq *rq, struct task_struct *p, int flags) { if (!(flags & ENQUEUE_NOCLOCK)) @@ -2326,6 +2348,11 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p) p->prio = p->normal_prio = __normal_prio(p); set_load_weight(p, false); +#ifdef CONFIG_UCLAMP_TASK + p->uclamp[UCLAMP_MIN] = 0; + p->uclamp[UCLAMP_MAX] = SCHED_CAPACITY_SCALE; +#endif + /* * We don't need the reset flag anymore after the fork. It has * fulfilled its duty: @@ -4214,6 +4241,13 @@ static int __sched_setscheduler(struct task_struct *p, return retval; } + /* Configure utilization clamps for the task */ + if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP) { + retval = __setscheduler_uclamp(p, attr); + if (retval) + return retval; + } + /* * Make sure no PI-waiters arrive (or leave) while we are * changing the priority of the task: @@ -4499,6 +4533,10 @@ static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *a if (ret) return -EFAULT; + if ((attr->sched_flags & SCHED_FLAG_UTIL_CLAMP) && + size < SCHED_ATTR_SIZE_VER1) + return -EINVAL; + /* * XXX: Do we want to be lenient like existing syscalls; or do we want * to be strict and return an error on out-of-bounds values? 
@@ -4729,6 +4767,11 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr, else attr.sched_nice = task_nice(p); +#ifdef CONFIG_UCLAMP_TASK + attr.sched_util_min = p->uclamp[UCLAMP_MIN]; + attr.sched_util_max = p->uclamp[UCLAMP_MAX]; +#endif + rcu_read_unlock(); retval = sched_read_attr(uattr, &attr, size); From patchwork Tue Jan 15 10:15:00 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Patrick Bellasi X-Patchwork-Id: 10764227 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E7DB71390 for ; Tue, 15 Jan 2019 10:17:13 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id D56A82B3AB for ; Tue, 15 Jan 2019 10:17:13 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id C8C3B2B89A; Tue, 15 Jan 2019 10:17:13 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 335FE2B3AB for ; Tue, 15 Jan 2019 10:17:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728730AbfAOKPj (ORCPT ); Tue, 15 Jan 2019 05:15:39 -0500 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:46826 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727238AbfAOKPh (ORCPT ); Tue, 15 Jan 2019 05:15:37 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E179115BE; Tue, 15 Jan 2019 02:15:36 -0800 (PST) Received: from e110439-lin.cambridge.arm.com (e110439-lin.cambridge.arm.com [10.1.194.43]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id A73733F70D; Tue, 15 Jan 2019 02:15:33 -0800 (PST) From: Patrick Bellasi To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, linux-api@vger.kernel.org Cc: Ingo Molnar , Peter Zijlstra , Tejun Heo , "Rafael J . Wysocki" , Vincent Guittot , Viresh Kumar , Paul Turner , Quentin Perret , Dietmar Eggemann , Morten Rasmussen , Juri Lelli , Todd Kjos , Joel Fernandes , Steve Muckle , Suren Baghdasaryan Subject: [PATCH v6 03/16] sched/core: uclamp: Map TASK's clamp values into CPU's clamp buckets Date: Tue, 15 Jan 2019 10:15:00 +0000 Message-Id: <20190115101513.2822-4-patrick.bellasi@arm.com> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20190115101513.2822-1-patrick.bellasi@arm.com> References: <20190115101513.2822-1-patrick.bellasi@arm.com> MIME-Version: 1.0 Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Utilization clamping requires each CPU to know which clamp values are assigned to tasks RUNNABLE on that CPU. A per-CPU array of reference counters can be used where each entry tracks how many RUNNABLE tasks require the same clamp value on each CPU. However, the range of clamp values is too wide to track all the possible values in a per-CPU array. 
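For scale, a hypothetical full-resolution implementation of the per-CPU refcounting described above would look roughly like the strawman below (not part of the patch); the following paragraphs describe how bucketization avoids this cost.

  /* Strawman, NOT what this patch implements: one refcount per possible
   * clamp value, per clamp index, per CPU. With SCHED_CAPACITY_SCALE == 1024
   * and UCLAMP_CNT == 2 this is 2 * 1025 counters per CPU (~8KB with 32-bit
   * counters), spanning many cache lines while only a handful of entries
   * would ever be non-zero at run-time. */
  struct uclamp_cpu_naive {
          unsigned int tasks[UCLAMP_CNT][SCHED_CAPACITY_SCALE + 1];
  };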
Trade-off clamping precision for run-time and space efficiency using a "bucketization and mapping" mechanism to translate "clamp values" into "clamp buckets", each one representing a range of possible clamp values. While the bucketization allows to use only a minimal set of clamp buckets at run-time, the mapping ensures that the clamp buckets in use are always at the beginning of the per-CPU array. The minimum set of clamp buckets used at run-time depends on their granularity and how many clamp values the target system expects to use. Since on most systems we expect only a few different clamp values, the bucketization and mapping mechanism increases our chances to have all the required data fitting in one cache line. For example, if we have only 20% and 25% clamped tasks, by setting: CONFIG_UCLAMP_BUCKETS_COUNT 20 we allocate 20 clamp buckets with 5% resolution each, however we will use only 2 of them at run-time, since their 5% resolution is enough to always distinguish the clamp values in use, and they will both fit into a single cache line for each CPU. Introduce the "bucketization and mapping" mechanisms which are required for the implement of the per-CPU operations. Add a new "uclamp_enabled" sched_class attribute to mark which class will contribute to clamping the CPU utilization. Move few callbacks around to ensure that the most used callbacks are all in the same cache line along with the new attribute. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra --- Changes in v6: Message-ID: <20181107144448.GH9761@hirez.programming.kicks-ass.net> - added bucketization support since the beginning to avoid semi-functional code in this patch Message-ID: <20181107141414.GF9761@hirez.programming.kicks-ass.net> - update cmpxchg loops to use "do { } while (cmpxchg(ptr, old, new) != old)" - switch to usage of try_cmpxchg() Message-ID: <20181107145527.GI9761@hirez.programming.kicks-ass.net> - use SCHED_WARN_ON() instead of CONFIG_SCHED_DEBUG guarded blocks - ensure se_count never underflow Message-ID: <20181112000910.GC3038@worktop> - wholesale s/group/bucket/ Message-ID: <20181111164754.GA3038@worktop> - consistently use unary (++/--) operators Message-ID: <20181107142428.GG14309@e110439-lin> - added some better comments for invariant conditions Message-ID: <20181107145612.GJ14309@e110439-lin> - ensure UCLAMP_BUCKETS_COUNT >= 1 Others: - added and make use of the bit_for() macro - wholesale s/_{get,put}/_{inc,dec}/ to match refcount APIs - documentation review and cleanup --- include/linux/log2.h | 37 ++++++ include/linux/sched.h | 44 ++++++- include/linux/sched/task.h | 6 + include/linux/sched/topology.h | 6 - include/uapi/linux/sched.h | 6 +- init/Kconfig | 32 +++++ init/init_task.c | 4 - kernel/exit.c | 1 + kernel/sched/core.c | 234 ++++++++++++++++++++++++++++++--- kernel/sched/fair.c | 4 + kernel/sched/sched.h | 19 ++- 11 files changed, 362 insertions(+), 31 deletions(-) diff --git a/include/linux/log2.h b/include/linux/log2.h index 2af7f77866d0..e2db25734532 100644 --- a/include/linux/log2.h +++ b/include/linux/log2.h @@ -224,4 +224,41 @@ int __order_base_2(unsigned long n) ilog2((n) - 1) + 1) : \ __order_base_2(n) \ ) + +static inline __attribute__((const)) +int __bits_per(unsigned long n) +{ + if (n < 2) + return 1; + if (is_power_of_2(n)) + return order_base_2(n) + 1; + return order_base_2(n); +} + +/** + * bits_per - calculate the number of bits required for the argument + * @n: parameter + * + * This is constant-capable and can be used for compile time + * initiaizations, 
e.g bitfields. + * + * The first few values calculated by this routine: + * bf(0) = 1 + * bf(1) = 1 + * bf(2) = 2 + * bf(3) = 2 + * bf(4) = 3 + * ... and so on. + */ +#define bits_per(n) \ +( \ + __builtin_constant_p(n) ? ( \ + ((n) == 0 || (n) == 1) ? 1 : ( \ + ((n) & (n - 1)) == 0 ? \ + ilog2((n) - 1) + 2 : \ + ilog2((n) - 1) + 1 \ + ) \ + ) : \ + __bits_per(n) \ +) #endif /* _LINUX_LOG2_H */ diff --git a/include/linux/sched.h b/include/linux/sched.h index 65199309b866..4f72f956850f 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -323,6 +323,12 @@ struct sched_info { # define SCHED_FIXEDPOINT_SHIFT 10 # define SCHED_FIXEDPOINT_SCALE (1L << SCHED_FIXEDPOINT_SHIFT) +/* + * Increase resolution of cpu_capacity calculations + */ +#define SCHED_CAPACITY_SHIFT SCHED_FIXEDPOINT_SHIFT +#define SCHED_CAPACITY_SCALE (1L << SCHED_CAPACITY_SHIFT) + struct load_weight { unsigned long weight; u32 inv_weight; @@ -580,6 +586,42 @@ struct sched_dl_entity { struct hrtimer inactive_timer; }; +#ifdef CONFIG_UCLAMP_TASK +/* + * Number of utilization clamp buckets. + * + * The first clamp bucket (bucket_id=0) is used to track non clamped tasks, i.e. + * util_{min,max} (0,SCHED_CAPACITY_SCALE). Thus we allocate one more bucket in + * addition to the compile time configured number. + */ +#define UCLAMP_BUCKETS (CONFIG_UCLAMP_BUCKETS_COUNT + 1) + +/* + * Utilization clamp bucket + * @value: clamp value tracked by a clamp bucket + * @bucket_id: the bucket index used by the fast-path + * @mapped: the bucket index is valid + * + * A utilization clamp bucket maps a: + * clamp value (value), i.e. + * util_{min,max} value requested from userspace + * to a: + * clamp bucket index (bucket_id), i.e. + * index of the per-cpu RUNNABLE tasks refcounting array + * + * The mapped bit is set whenever a task has been mapped on a clamp bucket for + * the first time. 
When this bit is set, any: + * uclamp_bucket_inc() - for a new clamp value + * is matched by a: + * uclamp_bucket_dec() - for the old clamp value + */ +struct uclamp_se { + unsigned int value : bits_per(SCHED_CAPACITY_SCALE); + unsigned int bucket_id : bits_per(UCLAMP_BUCKETS); + unsigned int mapped : 1; +}; +#endif /* CONFIG_UCLAMP_TASK */ + union rcu_special { struct { u8 blocked; @@ -661,7 +703,7 @@ struct task_struct { struct sched_dl_entity dl; #ifdef CONFIG_UCLAMP_TASK - int uclamp[UCLAMP_CNT]; + struct uclamp_se uclamp[UCLAMP_CNT]; #endif #ifdef CONFIG_PREEMPT_NOTIFIERS diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h index 44c6f15800ff..c3a71698b6b8 100644 --- a/include/linux/sched/task.h +++ b/include/linux/sched/task.h @@ -70,6 +70,12 @@ static inline void exit_thread(struct task_struct *tsk) #endif extern void do_group_exit(int); +#ifdef CONFIG_UCLAMP_TASK +extern void uclamp_exit_task(struct task_struct *p); +#else +static inline void uclamp_exit_task(struct task_struct *p) { } +#endif /* CONFIG_UCLAMP_TASK */ + extern void exit_files(struct task_struct *); extern void exit_itimers(struct signal_struct *); diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h index c31d3a47a47c..04beadac6985 100644 --- a/include/linux/sched/topology.h +++ b/include/linux/sched/topology.h @@ -6,12 +6,6 @@ #include -/* - * Increase resolution of cpu_capacity calculations - */ -#define SCHED_CAPACITY_SHIFT SCHED_FIXEDPOINT_SHIFT -#define SCHED_CAPACITY_SCALE (1L << SCHED_CAPACITY_SHIFT) - /* * sched-domains (multiprocessor balancing) declarations: */ diff --git a/include/uapi/linux/sched.h b/include/uapi/linux/sched.h index 9ef6dad0f854..36c65da32b31 100644 --- a/include/uapi/linux/sched.h +++ b/include/uapi/linux/sched.h @@ -53,7 +53,11 @@ #define SCHED_FLAG_RECLAIM 0x02 #define SCHED_FLAG_DL_OVERRUN 0x04 #define SCHED_FLAG_KEEP_POLICY 0x08 -#define SCHED_FLAG_UTIL_CLAMP 0x10 + +#define SCHED_FLAG_UTIL_CLAMP_MIN 0x10 +#define SCHED_FLAG_UTIL_CLAMP_MAX 0x20 +#define SCHED_FLAG_UTIL_CLAMP (SCHED_FLAG_UTIL_CLAMP_MIN | \ + SCHED_FLAG_UTIL_CLAMP_MAX) #define SCHED_FLAG_ALL (SCHED_FLAG_RESET_ON_FORK | \ SCHED_FLAG_RECLAIM | \ diff --git a/init/Kconfig b/init/Kconfig index ea7c928a177b..e60950ec01c0 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -660,7 +660,39 @@ config UCLAMP_TASK If in doubt, say N. +config UCLAMP_BUCKETS_COUNT + int "Number of supported utilization clamp buckets" + range 5 20 + default 5 + depends on UCLAMP_TASK + help + Defines the number of clamp buckets to use. The range of each bucket + will be SCHED_CAPACITY_SCALE/UCLAMP_BUCKETS_COUNT. The higher the + number of clamp buckets the finer their granularity and the higher + the precision of clamping aggregation and tracking at run-time. + + For example, with the default configuration we will have 5 clamp + buckets tracking 20% utilization each. A 25% boosted tasks will be + refcounted in the [20..39]% bucket and will set the bucket clamp + effective value to 25%. + If a second 30% boosted task should be co-scheduled on the same CPU, + that task will be refcounted in the same bucket of the first task and + it will boost the bucket clamp effective value to 30%. + The clamp effective value of a bucket is reset to its nominal value + (20% in the example above) when there are anymore tasks refcounted in + that bucket. + + An additional boost/capping margin can be added to some tasks. In the + example above the 25% task will be boosted to 30% until it exits the + CPU. 
If that should be considered not acceptable on certain systems, + it's always possible to reduce the margin by increasing the number of + clamp buckets to trade off used memory for run-time tracking + precision. + + If in doubt, use the default value. + endmenu + # # For architectures that want to enable the support for NUMA-affine scheduler # balancing logic: diff --git a/init/init_task.c b/init/init_task.c index 5bfdcc3fb839..7f77741b6a9b 100644 --- a/init/init_task.c +++ b/init/init_task.c @@ -92,10 +92,6 @@ struct task_struct init_task #endif #ifdef CONFIG_CGROUP_SCHED .sched_task_group = &root_task_group, -#endif -#ifdef CONFIG_UCLAMP_TASK - .uclamp[UCLAMP_MIN] = 0, - .uclamp[UCLAMP_MAX] = SCHED_CAPACITY_SCALE, #endif .ptraced = LIST_HEAD_INIT(init_task.ptraced), .ptrace_entry = LIST_HEAD_INIT(init_task.ptrace_entry), diff --git a/kernel/exit.c b/kernel/exit.c index 2d14979577ee..c2a4aa4463be 100644 --- a/kernel/exit.c +++ b/kernel/exit.c @@ -877,6 +877,7 @@ void __noreturn do_exit(long code) sched_autogroup_exit_task(tsk); cgroup_exit(tsk); + uclamp_exit_task(tsk); /* * FIXME: do that only when needed, using sched_exit tracepoint diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 66ff83e115db..3f87898b13a0 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -718,25 +718,221 @@ static void set_load_weight(struct task_struct *p, bool update_load) } #ifdef CONFIG_UCLAMP_TASK +/* + * Serializes updates of utilization clamp values + * + * The (slow-path) user-space triggers utilization clamp value updates which + * can require updates on (fast-path) scheduler's data structures used to + * support enqueue/dequeue operations. + * While the per-CPU rq lock protects fast-path update operations, user-space + * requests are serialized using a mutex to reduce the risk of conflicting + * updates or API abuses. + */ +static DEFINE_MUTEX(uclamp_mutex); + +/* + * Reference count utilization clamp buckets + * @value: the utilization "clamp value" tracked by this clamp bucket + * @se_count: the number of scheduling entities using this "clamp value" + * @data: accessor for value and se_count reading + * @adata: accessor for atomic operations on value and se_count + */ +union uclamp_map { + struct { + unsigned long value : bits_per(SCHED_CAPACITY_SCALE); + unsigned long se_count : BITS_PER_LONG - + bits_per(SCHED_CAPACITY_SCALE); + }; + unsigned long data; + atomic_long_t adata; +}; + +/* + * Map SEs "clamp value" into CPUs "clamp bucket" + * + * Matrix mapping "clamp values" (value) to "clamp buckets" (bucket_id), + * for each "clamp index" (clamp_id) + */ +static union uclamp_map uclamp_maps[UCLAMP_CNT][UCLAMP_BUCKETS]; + +static inline unsigned int uclamp_bucket_value(unsigned int clamp_value) +{ +#define UCLAMP_BUCKET_DELTA (SCHED_CAPACITY_SCALE / CONFIG_UCLAMP_BUCKETS_COUNT) +#define UCLAMP_BUCKET_UPPER (UCLAMP_BUCKET_DELTA * CONFIG_UCLAMP_BUCKETS_COUNT) + + if (clamp_value >= UCLAMP_BUCKET_UPPER) + return SCHED_CAPACITY_SCALE; + + return UCLAMP_BUCKET_DELTA * (clamp_value / UCLAMP_BUCKET_DELTA); +} + +static void uclamp_bucket_dec(unsigned int clamp_id, unsigned int bucket_id) +{ + union uclamp_map *uc_maps = &uclamp_maps[clamp_id][0]; + union uclamp_map uc_map_old, uc_map_new; + + uc_map_old.data = atomic_long_read(&uc_maps[bucket_id].adata); + do { + /* + * Refcounting consistency check. If we release a non + * referenced bucket: refcounting is broken and we warn. 
+ */ + if (unlikely(!uc_map_old.se_count)) { + SCHED_WARN_ON(!uc_map_old.se_count); + return; + } + + uc_map_new = uc_map_old; + uc_map_new.se_count--; + + } while (!atomic_long_try_cmpxchg(&uc_maps[bucket_id].adata, + &uc_map_old.data, uc_map_new.data)); +} + +static void uclamp_bucket_inc(struct uclamp_se *uc_se, unsigned int clamp_id, + unsigned int clamp_value) +{ + union uclamp_map *uc_maps = &uclamp_maps[clamp_id][0]; + unsigned int prev_bucket_id = uc_se->bucket_id; + union uclamp_map uc_map_old, uc_map_new; + unsigned int free_bucket_id; + unsigned int bucket_value; + unsigned int bucket_id; + + bucket_value = uclamp_bucket_value(clamp_value); + + do { + /* Find the bucket_id of an already mapped clamp bucket... */ + free_bucket_id = UCLAMP_BUCKETS; + for (bucket_id = 0; bucket_id < UCLAMP_BUCKETS; ++bucket_id) { + uc_map_old.data = atomic_long_read(&uc_maps[bucket_id].adata); + if (free_bucket_id == UCLAMP_BUCKETS && !uc_map_old.se_count) + free_bucket_id = bucket_id; + if (uc_map_old.value == bucket_value) + break; + } + + /* ... or allocate a new clamp bucket */ + if (bucket_id >= UCLAMP_BUCKETS) { + /* + * A valid clamp bucket must always be available. + * If we cannot find one: refcounting is broken and we + * warn once. The sched_entity will be tracked in the + * fast-path using its previous clamp bucket, or not + * tracked at all if not yet mapped (i.e. it's new). + */ + if (unlikely(free_bucket_id == UCLAMP_BUCKETS)) { + SCHED_WARN_ON(free_bucket_id == UCLAMP_BUCKETS); + return; + } + bucket_id = free_bucket_id; + uc_map_old.data = atomic_long_read(&uc_maps[bucket_id].adata); + } + + uc_map_new.se_count = uc_map_old.se_count + 1; + uc_map_new.value = bucket_value; + + } while (!atomic_long_try_cmpxchg(&uc_maps[bucket_id].adata, + &uc_map_old.data, uc_map_new.data)); + + uc_se->value = clamp_value; + uc_se->bucket_id = bucket_id; + + if (uc_se->mapped) + uclamp_bucket_dec(clamp_id, prev_bucket_id); + + /* + * Task's sched_entity are refcounted in the fast-path only when they + * have got a valid clamp_bucket assigned. 
+ */ + uc_se->mapped = true; +} + static int __setscheduler_uclamp(struct task_struct *p, const struct sched_attr *attr) { - if (attr->sched_util_min > attr->sched_util_max) - return -EINVAL; - if (attr->sched_util_max > SCHED_CAPACITY_SCALE) + unsigned int lower_bound = p->uclamp[UCLAMP_MIN].value; + unsigned int upper_bound = p->uclamp[UCLAMP_MAX].value; + int result = 0; + + if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP_MIN) + lower_bound = attr->sched_util_min; + + if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP_MAX) + upper_bound = attr->sched_util_max; + + if (lower_bound > upper_bound || + upper_bound > SCHED_CAPACITY_SCALE) return -EINVAL; - p->uclamp[UCLAMP_MIN] = attr->sched_util_min; - p->uclamp[UCLAMP_MAX] = attr->sched_util_max; + mutex_lock(&uclamp_mutex); + if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP_MIN) { + uclamp_bucket_inc(&p->uclamp[UCLAMP_MIN], + UCLAMP_MIN, lower_bound); + } + if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP_MAX) { + uclamp_bucket_inc(&p->uclamp[UCLAMP_MAX], + UCLAMP_MAX, upper_bound); + } + mutex_unlock(&uclamp_mutex); - return 0; + return result; +} + +void uclamp_exit_task(struct task_struct *p) +{ + unsigned int clamp_id; + + if (unlikely(!p->sched_class->uclamp_enabled)) + return; + + for (clamp_id = 0; clamp_id < UCLAMP_CNT; ++clamp_id) { + if (!p->uclamp[clamp_id].mapped) + continue; + uclamp_bucket_dec(clamp_id, p->uclamp[clamp_id].bucket_id); + } +} + +static void uclamp_fork(struct task_struct *p, bool reset) +{ + unsigned int clamp_id; + + if (unlikely(!p->sched_class->uclamp_enabled)) + return; + + for (clamp_id = 0; clamp_id < UCLAMP_CNT; ++clamp_id) { + unsigned int clamp_value = p->uclamp[clamp_id].value; + + if (unlikely(reset)) + clamp_value = uclamp_none(clamp_id); + + p->uclamp[clamp_id].mapped = false; + uclamp_bucket_inc(&p->uclamp[clamp_id], clamp_id, clamp_value); + } +} + +static void __init init_uclamp(void) +{ + struct uclamp_se *uc_se; + unsigned int clamp_id; + + mutex_init(&uclamp_mutex); + + memset(uclamp_maps, 0, sizeof(uclamp_maps)); + for (clamp_id = 0; clamp_id < UCLAMP_CNT; ++clamp_id) { + uc_se = &init_task.uclamp[clamp_id]; + uclamp_bucket_inc(uc_se, clamp_id, uclamp_none(clamp_id)); + } } + #else /* CONFIG_UCLAMP_TASK */ static inline int __setscheduler_uclamp(struct task_struct *p, const struct sched_attr *attr) { return -EINVAL; } +static inline void uclamp_fork(struct task_struct *p, bool reset) { } +static inline void init_uclamp(void) { } #endif /* CONFIG_UCLAMP_TASK */ static inline void enqueue_task(struct rq *rq, struct task_struct *p, int flags) @@ -2320,6 +2516,7 @@ static inline void init_schedstats(void) {} int sched_fork(unsigned long clone_flags, struct task_struct *p) { unsigned long flags; + bool reset; __sched_fork(clone_flags, p); /* @@ -2337,7 +2534,8 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p) /* * Revert to default priority/policy on fork if requested. */ - if (unlikely(p->sched_reset_on_fork)) { + reset = p->sched_reset_on_fork; + if (unlikely(reset)) { if (task_has_dl_policy(p) || task_has_rt_policy(p)) { p->policy = SCHED_NORMAL; p->static_prio = NICE_TO_PRIO(0); @@ -2348,11 +2546,6 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p) p->prio = p->normal_prio = __normal_prio(p); set_load_weight(p, false); -#ifdef CONFIG_UCLAMP_TASK - p->uclamp[UCLAMP_MIN] = 0; - p->uclamp[UCLAMP_MAX] = SCHED_CAPACITY_SCALE; -#endif - /* * We don't need the reset flag anymore after the fork. 
It has * fulfilled its duty: @@ -2369,6 +2562,8 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p) init_entity_runnable_average(&p->se); + uclamp_fork(p, reset); + /* * The child is not yet in the pid-hash so no cgroup attach races, * and the cgroup is pinned to this child due to cgroup_fork() @@ -4613,10 +4808,15 @@ SYSCALL_DEFINE3(sched_setattr, pid_t, pid, struct sched_attr __user *, uattr, rcu_read_lock(); retval = -ESRCH; p = find_process_by_pid(pid); - if (p != NULL) - retval = sched_setattr(p, &attr); + if (likely(p)) + get_task_struct(p); rcu_read_unlock(); + if (likely(p)) { + retval = sched_setattr(p, &attr); + put_task_struct(p); + } + return retval; } @@ -4768,8 +4968,8 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr, attr.sched_nice = task_nice(p); #ifdef CONFIG_UCLAMP_TASK - attr.sched_util_min = p->uclamp[UCLAMP_MIN]; - attr.sched_util_max = p->uclamp[UCLAMP_MAX]; + attr.sched_util_min = p->uclamp[UCLAMP_MIN].value; + attr.sched_util_max = p->uclamp[UCLAMP_MAX].value; #endif rcu_read_unlock(); @@ -6125,6 +6325,8 @@ void __init sched_init(void) psi_init(); + init_uclamp(); + scheduler_running = 1; } diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 50aa2aba69bd..5de061b055d2 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -10540,6 +10540,10 @@ const struct sched_class fair_sched_class = { #ifdef CONFIG_FAIR_GROUP_SCHED .task_change_group = task_change_group_fair, #endif + +#ifdef CONFIG_UCLAMP_TASK + .uclamp_enabled = 1, +#endif }; #ifdef CONFIG_SCHED_DEBUG diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index d04530bf251f..a0b238156161 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -1630,10 +1630,12 @@ extern const u32 sched_prio_to_wmult[40]; struct sched_class { const struct sched_class *next; +#ifdef CONFIG_UCLAMP_TASK + int uclamp_enabled; +#endif + void (*enqueue_task) (struct rq *rq, struct task_struct *p, int flags); void (*dequeue_task) (struct rq *rq, struct task_struct *p, int flags); - void (*yield_task) (struct rq *rq); - bool (*yield_to_task)(struct rq *rq, struct task_struct *p, bool preempt); void (*check_preempt_curr)(struct rq *rq, struct task_struct *p, int flags); @@ -1666,7 +1668,6 @@ struct sched_class { void (*set_curr_task)(struct rq *rq); void (*task_tick)(struct rq *rq, struct task_struct *p, int queued); void (*task_fork)(struct task_struct *p); - void (*task_dead)(struct task_struct *p); /* * The switched_from() call is allowed to drop rq->lock, therefore we @@ -1683,12 +1684,17 @@ struct sched_class { void (*update_curr)(struct rq *rq); + void (*yield_task) (struct rq *rq); + bool (*yield_to_task)(struct rq *rq, struct task_struct *p, bool preempt); + #define TASK_SET_GROUP 0 #define TASK_MOVE_GROUP 1 #ifdef CONFIG_FAIR_GROUP_SCHED void (*task_change_group)(struct task_struct *p, int type); #endif + + void (*task_dead)(struct task_struct *p); }; static inline void put_prev_task(struct rq *rq, struct task_struct *prev) @@ -2203,6 +2209,13 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {} #endif /* CONFIG_CPU_FREQ */ +static inline unsigned int uclamp_none(int clamp_id) +{ + if (clamp_id == UCLAMP_MIN) + return 0; + return SCHED_CAPACITY_SCALE; +} + #ifdef arch_scale_freq_capacity # ifndef arch_scale_freq_invariant # define arch_scale_freq_invariant() true From patchwork Tue Jan 15 10:15:01 2019 Content-Type: text/plain; charset="utf-8" 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Patrick Bellasi
X-Patchwork-Id: 10764225
From: Patrick Bellasi
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, linux-api@vger.kernel.org
Cc: Ingo Molnar, Peter Zijlstra, Tejun Heo, Rafael J. Wysocki, Vincent Guittot, Viresh Kumar, Paul Turner, Quentin Perret, Dietmar Eggemann, Morten Rasmussen, Juri Lelli, Todd Kjos, Joel Fernandes, Steve Muckle, Suren Baghdasaryan
Subject: [PATCH v6 04/16] sched/core: uclamp: Add CPU's clamp buckets refcounting
Date: Tue, 15 Jan 2019 10:15:01 +0000
Message-Id: <20190115101513.2822-5-patrick.bellasi@arm.com>
X-Mailer: git-send-email 2.19.2
In-Reply-To: <20190115101513.2822-1-patrick.bellasi@arm.com>
References: <20190115101513.2822-1-patrick.bellasi@arm.com>
X-Mailing-List: linux-pm@vger.kernel.org

Utilization clamping allows the CPU's utilization to be clamped within a [util_min, util_max] range, depending on the set of RUNNABLE tasks on that CPU. Each task references two "clamp buckets" defining its minimum and maximum (util_{min,max}) utilization "clamp values". A CPU's clamp bucket is active if there is at least one RUNNABLE task enqueued on that CPU which refcounts that bucket.

When a task is {en,de}queued {on,from} a CPU, the set of active clamp buckets on that CPU can change. Since each clamp bucket enforces a different utilization clamp value, when the set of active clamp buckets changes, a new "aggregated" clamp value is computed for that CPU. Clamp values are always MAX aggregated for both util_min and util_max. This ensures that no task can degrade the performance of other co-scheduled tasks which are more boosted (i.e. have a higher util_min clamp) or less capped (i.e. have a higher util_max clamp); a short worked example follows.
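A worked example of the MAX aggregation (values assumed purely for illustration, not taken from the patch):

  task A: util_min = 200, util_max = 1024   (boosted, not capped)
  task B: util_min =   0, util_max =  512   (not boosted, capped)

  Both A and B RUNNABLE on the same CPU:
    cpu util_min = max(200, 0)    = 200   -> A keeps its boost
    cpu util_max = max(1024, 512) = 1024  -> B's cap cannot throttle A

  A is dequeued, only B remains RUNNABLE:
    cpu util_min = 0, cpu util_max = 512  -> B's cap takes effect again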
Each task has a: task_struct::uclamp[clamp_id]::bucket_id to track the "bucket index" of the CPU's clamp bucket it refcounts while enqueued, for each clamp index (clamp_id). Each CPU's rq has a: rq::uclamp[clamp_id]::bucket[bucket_id].tasks to track how many tasks, currently RUNNABLE on that CPU, refcount each clamp bucket (bucket_id) of a clamp index (clamp_id). Each CPU's rq has also a: rq::uclamp[clamp_id]::bucket[bucket_id].value to track the clamp value of each clamp bucket (bucket_id) of a clamp index (clamp_id). The unordered array rq::uclamp::bucket[clamp_id][] is scanned every time we need to find a new MAX aggregated clamp value for a clamp_id. This operation is required only when we dequeue the last task of a clamp bucket tracking the current MAX aggregated clamp value. In these cases, the CPU is either entering IDLE or going to schedule a less boosted or more clamped task. The expected number of different clamp values, configured at build time, is small enough to fit the full unordered array into a single cache line. In most use-cases we expect less than 10 different clamp values for each clamp_id. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra --- Changes in v6: Message-ID: <20181113151127.GA7681@darkstar> - use SCHED_WARN_ON() instead of CONFIG_SCHED_DEBUG guarded WARN()s - add some better inline documentation to explain per-CPU initializations - add some local variables to use library's max() for aggregation on bitfields attirbutes Message-ID: <20181112000910.GC3038@worktop> - wholesale s/group/bucket/ Message-ID: <20181111164754.GA3038@worktop> - consistently use unary (++/--) operators Others: - updated from rq::uclamp::group[clamp_id][group_id] to rq::uclamp[clamp_id]::bucket[bucket_id] which better matches the layout already used for tasks, i.e. p::uclamp[clamp_id]::value - use {WRITE,READ}_ONCE() for rq's clamp access - update layout of rq::uclamp_cpu to better match that of tasks, i.e now access CPU's clamp buckets as: rq->uclamp[clamp_id]{.bucket[bucket_id].value} which matches: p->uclamp[clamp_id] --- include/linux/sched.h | 6 ++ kernel/sched/core.c | 152 ++++++++++++++++++++++++++++++++++++++++++ kernel/sched/sched.h | 49 ++++++++++++++ 3 files changed, 207 insertions(+) diff --git a/include/linux/sched.h b/include/linux/sched.h index 4f72f956850f..84294925d006 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -601,6 +601,7 @@ struct sched_dl_entity { * @value: clamp value tracked by a clamp bucket * @bucket_id: the bucket index used by the fast-path * @mapped: the bucket index is valid + * @active: the se is currently refcounted in a CPU's clamp bucket * * A utilization clamp bucket maps a: * clamp value (value), i.e. @@ -614,11 +615,16 @@ struct sched_dl_entity { * uclamp_bucket_inc() - for a new clamp value * is matched by a: * uclamp_bucket_dec() - for the old clamp value + * + * The active bit is set whenever a task has got an effective clamp bucket + * and value assigned, which can be different from the user requested ones. + * This allows to know a task is actually refcounting a CPU's clamp bucket. 
*/ struct uclamp_se { unsigned int value : bits_per(SCHED_CAPACITY_SCALE); unsigned int bucket_id : bits_per(UCLAMP_BUCKETS); unsigned int mapped : 1; + unsigned int active : 1; }; #endif /* CONFIG_UCLAMP_TASK */ diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 3f87898b13a0..190137cd7b3b 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -766,6 +766,124 @@ static inline unsigned int uclamp_bucket_value(unsigned int clamp_value) return UCLAMP_BUCKET_DELTA * (clamp_value / UCLAMP_BUCKET_DELTA); } +static inline void uclamp_cpu_update(struct rq *rq, unsigned int clamp_id) +{ + unsigned int max_value = 0; + unsigned int bucket_id; + + for (bucket_id = 0; bucket_id < UCLAMP_BUCKETS; ++bucket_id) { + unsigned int bucket_value; + + if (!rq->uclamp[clamp_id].bucket[bucket_id].tasks) + continue; + + /* Both min and max clamps are MAX aggregated */ + bucket_value = rq->uclamp[clamp_id].bucket[bucket_id].value; + max_value = max(max_value, bucket_value); + if (max_value >= SCHED_CAPACITY_SCALE) + break; + } + WRITE_ONCE(rq->uclamp[clamp_id].value, max_value); +} + +/* + * When a task is enqueued on a CPU's rq, the clamp bucket currently defined by + * the task's uclamp::bucket_id is reference counted on that CPU. This also + * immediately updates the CPU's clamp value if required. + * + * Since tasks know their specific value requested from user-space, we track + * within each bucket the maximum value for tasks refcounted in that bucket. + */ +static inline void uclamp_cpu_inc_id(struct task_struct *p, struct rq *rq, + unsigned int clamp_id) +{ + unsigned int cpu_clamp, grp_clamp, tsk_clamp; + unsigned int bucket_id; + + if (unlikely(!p->uclamp[clamp_id].mapped)) + return; + + bucket_id = p->uclamp[clamp_id].bucket_id; + p->uclamp[clamp_id].active = true; + + rq->uclamp[clamp_id].bucket[bucket_id].tasks++; + + /* CPU's clamp buckets track the max effective clamp value */ + tsk_clamp = p->uclamp[clamp_id].value; + grp_clamp = rq->uclamp[clamp_id].bucket[bucket_id].value; + rq->uclamp[clamp_id].bucket[bucket_id].value = max(grp_clamp, tsk_clamp); + + /* Update CPU clamp value if required */ + cpu_clamp = READ_ONCE(rq->uclamp[clamp_id].value); + WRITE_ONCE(rq->uclamp[clamp_id].value, max(cpu_clamp, tsk_clamp)); +} + +/* + * When a task is dequeued from a CPU's rq, the CPU's clamp bucket reference + * counted by the task is released. If this is the last task reference + * counting the CPU's max active clamp value, then the CPU's clamp value is + * updated. + * Both the tasks reference counter and the CPU's cached clamp values are + * expected to be always valid, if we detect they are not we skip the updates, + * enforce a consistent state and warn. 
+ */ +static inline void uclamp_cpu_dec_id(struct task_struct *p, struct rq *rq, + unsigned int clamp_id) +{ + unsigned int clamp_value; + unsigned int bucket_id; + + if (unlikely(!p->uclamp[clamp_id].mapped)) + return; + + bucket_id = p->uclamp[clamp_id].bucket_id; + p->uclamp[clamp_id].active = false; + + SCHED_WARN_ON(!rq->uclamp[clamp_id].bucket[bucket_id].tasks); + if (likely(rq->uclamp[clamp_id].bucket[bucket_id].tasks)) + rq->uclamp[clamp_id].bucket[bucket_id].tasks--; + + /* We accept to (possibly) overboost tasks still RUNNABLE */ + if (likely(rq->uclamp[clamp_id].bucket[bucket_id].tasks)) + return; + clamp_value = rq->uclamp[clamp_id].bucket[bucket_id].value; + + /* The CPU's clamp value is expected to always track the max */ + SCHED_WARN_ON(clamp_value > rq->uclamp[clamp_id].value); + + if (clamp_value >= READ_ONCE(rq->uclamp[clamp_id].value)) { + /* + * Reset CPU's clamp bucket value to its nominal value whenever + * there are anymore RUNNABLE tasks refcounting it. + */ + rq->uclamp[clamp_id].bucket[bucket_id].value = + uclamp_maps[clamp_id][bucket_id].value; + uclamp_cpu_update(rq, clamp_id); + } +} + +static inline void uclamp_cpu_inc(struct rq *rq, struct task_struct *p) +{ + unsigned int clamp_id; + + if (unlikely(!p->sched_class->uclamp_enabled)) + return; + + for (clamp_id = 0; clamp_id < UCLAMP_CNT; ++clamp_id) + uclamp_cpu_inc_id(p, rq, clamp_id); +} + +static inline void uclamp_cpu_dec(struct rq *rq, struct task_struct *p) +{ + unsigned int clamp_id; + + if (unlikely(!p->sched_class->uclamp_enabled)) + return; + + for (clamp_id = 0; clamp_id < UCLAMP_CNT; ++clamp_id) + uclamp_cpu_dec_id(p, rq, clamp_id); +} + static void uclamp_bucket_dec(unsigned int clamp_id, unsigned int bucket_id) { union uclamp_map *uc_maps = &uclamp_maps[clamp_id][0]; @@ -798,6 +916,7 @@ static void uclamp_bucket_inc(struct uclamp_se *uc_se, unsigned int clamp_id, unsigned int free_bucket_id; unsigned int bucket_value; unsigned int bucket_id; + int cpu; bucket_value = uclamp_bucket_value(clamp_value); @@ -835,6 +954,28 @@ static void uclamp_bucket_inc(struct uclamp_se *uc_se, unsigned int clamp_id, } while (!atomic_long_try_cmpxchg(&uc_maps[bucket_id].adata, &uc_map_old.data, uc_map_new.data)); + /* + * Ensure each CPU tracks the correct value for this clamp bucket. + * This initialization of per-CPU variables is required only when a + * clamp value is requested for the first time from a slow-path. 
+ */ + if (unlikely(!uc_map_old.se_count)) { + for_each_possible_cpu(cpu) { + struct uclamp_cpu *uc_cpu = + &cpu_rq(cpu)->uclamp[clamp_id]; + + /* CPU's tasks count must be 0 for free buckets */ + SCHED_WARN_ON(uc_cpu->bucket[bucket_id].tasks); + if (unlikely(uc_cpu->bucket[bucket_id].tasks)) + uc_cpu->bucket[bucket_id].tasks = 0; + + /* Minimize cache lines invalidation */ + if (uc_cpu->bucket[bucket_id].value == bucket_value) + continue; + uc_cpu->bucket[bucket_id].value = bucket_value; + } + } + uc_se->value = clamp_value; uc_se->bucket_id = bucket_id; @@ -907,6 +1048,7 @@ static void uclamp_fork(struct task_struct *p, bool reset) clamp_value = uclamp_none(clamp_id); p->uclamp[clamp_id].mapped = false; + p->uclamp[clamp_id].active = false; uclamp_bucket_inc(&p->uclamp[clamp_id], clamp_id, clamp_value); } } @@ -915,9 +1057,15 @@ static void __init init_uclamp(void) { struct uclamp_se *uc_se; unsigned int clamp_id; + int cpu; mutex_init(&uclamp_mutex); + for_each_possible_cpu(cpu) { + memset(&cpu_rq(cpu)->uclamp, 0, sizeof(struct uclamp_cpu)); + cpu_rq(cpu)->uclamp[UCLAMP_MAX].value = uclamp_none(UCLAMP_MAX); + } + memset(uclamp_maps, 0, sizeof(uclamp_maps)); for (clamp_id = 0; clamp_id < UCLAMP_CNT; ++clamp_id) { uc_se = &init_task.uclamp[clamp_id]; @@ -926,6 +1074,8 @@ static void __init init_uclamp(void) } #else /* CONFIG_UCLAMP_TASK */ +static inline void uclamp_cpu_inc(struct rq *rq, struct task_struct *p) { } +static inline void uclamp_cpu_dec(struct rq *rq, struct task_struct *p) { } static inline int __setscheduler_uclamp(struct task_struct *p, const struct sched_attr *attr) { @@ -945,6 +1095,7 @@ static inline void enqueue_task(struct rq *rq, struct task_struct *p, int flags) psi_enqueue(p, flags & ENQUEUE_WAKEUP); } + uclamp_cpu_inc(rq, p); p->sched_class->enqueue_task(rq, p, flags); } @@ -958,6 +1109,7 @@ static inline void dequeue_task(struct rq *rq, struct task_struct *p, int flags) psi_dequeue(p, flags & DEQUEUE_SLEEP); } + uclamp_cpu_dec(rq, p); p->sched_class->dequeue_task(rq, p, flags); } diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index a0b238156161..06ff7d890ff6 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -797,6 +797,50 @@ extern void rto_push_irq_work_func(struct irq_work *work); #endif #endif /* CONFIG_SMP */ +#ifdef CONFIG_UCLAMP_TASK +/* + * struct uclamp_bucket - Utilization clamp bucket + * @value: utilization clamp value for tasks on this clamp bucket + * @tasks: number of RUNNABLE tasks on this clamp bucket + * + * Keep track of how many tasks are RUNNABLE for a given utilization + * clamp value. + */ +struct uclamp_bucket { + unsigned long value : bits_per(SCHED_CAPACITY_SCALE); + unsigned long tasks : BITS_PER_LONG - bits_per(SCHED_CAPACITY_SCALE); +}; + +/* + * struct uclamp_cpu - CPU's utilization clamp + * @value: currently active clamp values for a CPU + * @bucket: utilization clamp buckets affecting a CPU + * + * Keep track of RUNNABLE tasks on a CPUs to aggregate their clamp values. + * A clamp value is affecting a CPU where there is at least one task RUNNABLE + * (or actually running) with that value. + * + * We have up to UCLAMP_CNT possible different clamp values, which are + * currently only two: minmum utilization and maximum utilization. + * + * All utilization clamping values are MAX aggregated, since: + * - for util_min: we want to run the CPU at least at the max of the minimum + * utilization required by its currently RUNNABLE tasks. 
+ * - for util_max: we want to allow the CPU to run up to the max of the + * maximum utilization allowed by its currently RUNNABLE tasks. + * + * Since on each system we expect only a limited number of different + * utilization clamp values (CONFIG_UCLAMP_bucketS_COUNT), we use a simple + * array to track the metrics required to compute all the per-CPU utilization + * clamp values. The additional slot is used to track the default clamp + * values, i.e. no min/max clamping at all. + */ +struct uclamp_cpu { + unsigned int value; + struct uclamp_bucket bucket[UCLAMP_BUCKETS]; +}; +#endif /* CONFIG_UCLAMP_TASK */ + /* * This is the main, per-CPU runqueue data structure. * @@ -835,6 +879,11 @@ struct rq { unsigned long nr_load_updates; u64 nr_switches; +#ifdef CONFIG_UCLAMP_TASK + /* Utilization clamp values based on CPU's RUNNABLE tasks */ + struct uclamp_cpu uclamp[UCLAMP_CNT] ____cacheline_aligned; +#endif + struct cfs_rq cfs; struct rt_rq rt; struct dl_rq dl; From patchwork Tue Jan 15 10:15:02 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Patrick Bellasi X-Patchwork-Id: 10764223 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id EC97014E5 for ; Tue, 15 Jan 2019 10:17:07 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id DBFDF2B3AB for ; Tue, 15 Jan 2019 10:17:07 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id CFD512B89A; Tue, 15 Jan 2019 10:17:07 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 5B5B72B3AB for ; Tue, 15 Jan 2019 10:17:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728791AbfAOKPp (ORCPT ); Tue, 15 Jan 2019 05:15:45 -0500 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:46854 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727238AbfAOKPo (ORCPT ); Tue, 15 Jan 2019 05:15:44 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9D2531596; Tue, 15 Jan 2019 02:15:43 -0800 (PST) Received: from e110439-lin.cambridge.arm.com (e110439-lin.cambridge.arm.com [10.1.194.43]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 865853F70D; Tue, 15 Jan 2019 02:15:40 -0800 (PST) From: Patrick Bellasi To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, linux-api@vger.kernel.org Cc: Ingo Molnar , Peter Zijlstra , Tejun Heo , "Rafael J . 
Wysocki" , Vincent Guittot , Viresh Kumar , Paul Turner , Quentin Perret , Dietmar Eggemann , Morten Rasmussen , Juri Lelli , Todd Kjos , Joel Fernandes , Steve Muckle , Suren Baghdasaryan Subject: [PATCH v6 05/16] sched/core: uclamp: Update CPU's refcount on clamp changes Date: Tue, 15 Jan 2019 10:15:02 +0000 Message-Id: <20190115101513.2822-6-patrick.bellasi@arm.com> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20190115101513.2822-1-patrick.bellasi@arm.com> References: <20190115101513.2822-1-patrick.bellasi@arm.com> MIME-Version: 1.0 Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Utilization clamp values enforced on a CPU by a task can be updated, for example via a sched_setattr() syscall, while a task is RUNNABLE on that CPU. A clamp value change always implies a clamp bucket refcount update to ensure the new constraints are enforced. Hook into uclamp_bucket_get() to trigger a CPU refcount syncup, via uclamp_cpu_{inc,dec}_id(), whenever a task is RUNNABLE. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra --- Changes in v6: Other: - wholesale s/group/bucket/ - wholesale s/_{get,put}/_{inc,dec}/ to match refcount APIs - small documentation updates --- kernel/sched/core.c | 48 +++++++++++++++++++++++++++++++++++++++------ 1 file changed, 42 insertions(+), 6 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 190137cd7b3b..67f059ee0a05 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -884,6 +884,38 @@ static inline void uclamp_cpu_dec(struct rq *rq, struct task_struct *p) uclamp_cpu_dec_id(p, rq, clamp_id); } +static inline void +uclamp_task_update_active(struct task_struct *p, unsigned int clamp_id) +{ + struct rq_flags rf; + struct rq *rq; + + /* + * Lock the task and the CPU where the task is (or was) queued. + * + * We might lock the (previous) rq of a !RUNNABLE task, but that's the + * price to pay to safely serialize util_{min,max} updates with + * enqueues, dequeues and migration operations. + * This is the same locking schema used by __set_cpus_allowed_ptr(). + */ + rq = task_rq_lock(p, &rf); + + /* + * Setting the clamp bucket is serialized by task_rq_lock(). + * If the task is not yet RUNNABLE and its task_struct is not + * affecting a valid clamp bucket, the next time it's enqueued, + * it will already see the updated clamp bucket value. 
+ */ + if (!p->uclamp[clamp_id].active) + goto done; + + uclamp_cpu_dec_id(p, rq, clamp_id); + uclamp_cpu_inc_id(p, rq, clamp_id); + +done: + task_rq_unlock(rq, p, &rf); +} + static void uclamp_bucket_dec(unsigned int clamp_id, unsigned int bucket_id) { union uclamp_map *uc_maps = &uclamp_maps[clamp_id][0]; @@ -907,8 +939,8 @@ static void uclamp_bucket_dec(unsigned int clamp_id, unsigned int bucket_id) &uc_map_old.data, uc_map_new.data)); } -static void uclamp_bucket_inc(struct uclamp_se *uc_se, unsigned int clamp_id, - unsigned int clamp_value) +static void uclamp_bucket_inc(struct task_struct *p, struct uclamp_se *uc_se, + unsigned int clamp_id, unsigned int clamp_value) { union uclamp_map *uc_maps = &uclamp_maps[clamp_id][0]; unsigned int prev_bucket_id = uc_se->bucket_id; @@ -979,6 +1011,9 @@ static void uclamp_bucket_inc(struct uclamp_se *uc_se, unsigned int clamp_id, uc_se->value = clamp_value; uc_se->bucket_id = bucket_id; + if (p) + uclamp_task_update_active(p, clamp_id); + if (uc_se->mapped) uclamp_bucket_dec(clamp_id, prev_bucket_id); @@ -1008,11 +1043,11 @@ static int __setscheduler_uclamp(struct task_struct *p, mutex_lock(&uclamp_mutex); if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP_MIN) { - uclamp_bucket_inc(&p->uclamp[UCLAMP_MIN], + uclamp_bucket_inc(p, &p->uclamp[UCLAMP_MIN], UCLAMP_MIN, lower_bound); } if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP_MAX) { - uclamp_bucket_inc(&p->uclamp[UCLAMP_MAX], + uclamp_bucket_inc(p, &p->uclamp[UCLAMP_MAX], UCLAMP_MAX, upper_bound); } mutex_unlock(&uclamp_mutex); @@ -1049,7 +1084,8 @@ static void uclamp_fork(struct task_struct *p, bool reset) p->uclamp[clamp_id].mapped = false; p->uclamp[clamp_id].active = false; - uclamp_bucket_inc(&p->uclamp[clamp_id], clamp_id, clamp_value); + uclamp_bucket_inc(NULL, &p->uclamp[clamp_id], + clamp_id, clamp_value); } } @@ -1069,7 +1105,7 @@ static void __init init_uclamp(void) memset(uclamp_maps, 0, sizeof(uclamp_maps)); for (clamp_id = 0; clamp_id < UCLAMP_CNT; ++clamp_id) { uc_se = &init_task.uclamp[clamp_id]; - uclamp_bucket_inc(uc_se, clamp_id, uclamp_none(clamp_id)); + uclamp_bucket_inc(NULL, uc_se, clamp_id, uclamp_none(clamp_id)); } } From patchwork Tue Jan 15 10:15:03 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Patrick Bellasi X-Patchwork-Id: 10764219 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4851614E5 for ; Tue, 15 Jan 2019 10:16:57 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 354592B3AB for ; Tue, 15 Jan 2019 10:16:57 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 272AA2B89A; Tue, 15 Jan 2019 10:16:57 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9920B2B3AB for ; Tue, 15 Jan 2019 10:16:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728826AbfAOKPu (ORCPT ); Tue, 15 Jan 2019 05:15:50 -0500 Received: from foss.arm.com ([217.140.101.70]:46876 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id 
S1728801AbfAOKPr (ORCPT ); Tue, 15 Jan 2019 05:15:47 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 000371596; Tue, 15 Jan 2019 02:15:46 -0800 (PST) Received: from e110439-lin.cambridge.arm.com (e110439-lin.cambridge.arm.com [10.1.194.43]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id DDA223F70D; Tue, 15 Jan 2019 02:15:43 -0800 (PST) From: Patrick Bellasi To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, linux-api@vger.kernel.org Cc: Ingo Molnar , Peter Zijlstra , Tejun Heo , "Rafael J . Wysocki" , Vincent Guittot , Viresh Kumar , Paul Turner , Quentin Perret , Dietmar Eggemann , Morten Rasmussen , Juri Lelli , Todd Kjos , Joel Fernandes , Steve Muckle , Suren Baghdasaryan Subject: [PATCH v6 06/16] sched/core: uclamp: Enforce last task UCLAMP_MAX Date: Tue, 15 Jan 2019 10:15:03 +0000 Message-Id: <20190115101513.2822-7-patrick.bellasi@arm.com> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20190115101513.2822-1-patrick.bellasi@arm.com> References: <20190115101513.2822-1-patrick.bellasi@arm.com> MIME-Version: 1.0 Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP When the task sleeps, it removes its max utilization clamp from its CPU. However, the blocked utilization on that CPU can be higher than the max clamp value enforced while the task was running. This allows undesired CPU frequency increases while a CPU is idle, for example, when another CPU on the same frequency domain triggers a frequency update, since schedutil can now see the full not clamped blocked utilization of the idle CPU. Fix this by using uclamp_cpu_dec_id(p, rq, UCLAMP_MAX) uclamp_cpu_update(rq, UCLAMP_MAX, clamp_value) to detect when a CPU has no more RUNNABLE clamped tasks and to flag this condition. Don't track any minimum utilization clamps since an idle CPU never requires a minimum frequency. The decay of the blocked utilization is good enough to reduce the CPU frequency. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra --- Changes in v6: Others: - moved UCLAMP_FLAG_IDLE management into dedicated functions: uclamp_idle_value() and uclamp_idle_reset() - switched from rq::uclamp::flags to rq::uclamp_flags, since now rq::uclamp is a per-clamp_id array --- kernel/sched/core.c | 51 +++++++++++++++++++++++++++++++++++++++++--- kernel/sched/sched.h | 2 ++ 2 files changed, 50 insertions(+), 3 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 67f059ee0a05..b7ac516a70be 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -766,9 +766,45 @@ static inline unsigned int uclamp_bucket_value(unsigned int clamp_value) return UCLAMP_BUCKET_DELTA * (clamp_value / UCLAMP_BUCKET_DELTA); } -static inline void uclamp_cpu_update(struct rq *rq, unsigned int clamp_id) +static inline unsigned int +uclamp_idle_value(struct rq *rq, unsigned int clamp_id, unsigned int clamp_value) +{ + /* + * Avoid blocked utilization pushing up the frequency when we go + * idle (which drops the max-clamp) by retaining the last known + * max-clamp. 
+ */ + if (clamp_id == UCLAMP_MAX) { + rq->uclamp_flags |= UCLAMP_FLAG_IDLE; + return clamp_value; + } + + return uclamp_none(UCLAMP_MIN); +} + +static inline void uclamp_idle_reset(struct rq *rq, unsigned int clamp_id, + unsigned int clamp_value) +{ + /* Reset max-clamp retention only on idle exit */ + if (!(rq->uclamp_flags & UCLAMP_FLAG_IDLE)) + return; + + WRITE_ONCE(rq->uclamp[clamp_id].value, clamp_value); + + /* + * This function is called for both UCLAMP_MIN (before) and UCLAMP_MAX + * (after). The idle flag is reset only the second time, when we know + * that UCLAMP_MIN has been already updated. + */ + if (clamp_id == UCLAMP_MAX) + rq->uclamp_flags &= ~UCLAMP_FLAG_IDLE; +} + +static inline void uclamp_cpu_update(struct rq *rq, unsigned int clamp_id, + unsigned int clamp_value) { unsigned int max_value = 0; + bool buckets_active = false; unsigned int bucket_id; for (bucket_id = 0; bucket_id < UCLAMP_BUCKETS; ++bucket_id) { @@ -776,6 +812,7 @@ static inline void uclamp_cpu_update(struct rq *rq, unsigned int clamp_id) if (!rq->uclamp[clamp_id].bucket[bucket_id].tasks) continue; + buckets_active = true; /* Both min and max clamps are MAX aggregated */ bucket_value = rq->uclamp[clamp_id].bucket[bucket_id].value; @@ -783,6 +820,10 @@ static inline void uclamp_cpu_update(struct rq *rq, unsigned int clamp_id) if (max_value >= SCHED_CAPACITY_SCALE) break; } + + if (unlikely(!buckets_active)) + max_value = uclamp_idle_value(rq, clamp_id, clamp_value); + WRITE_ONCE(rq->uclamp[clamp_id].value, max_value); } @@ -808,8 +849,11 @@ static inline void uclamp_cpu_inc_id(struct task_struct *p, struct rq *rq, rq->uclamp[clamp_id].bucket[bucket_id].tasks++; - /* CPU's clamp buckets track the max effective clamp value */ + /* Reset clamp holds on idle exit */ tsk_clamp = p->uclamp[clamp_id].value; + uclamp_idle_reset(rq, clamp_id, tsk_clamp); + + /* CPU's clamp buckets track the max effective clamp value */ grp_clamp = rq->uclamp[clamp_id].bucket[bucket_id].value; rq->uclamp[clamp_id].bucket[bucket_id].value = max(grp_clamp, tsk_clamp); @@ -858,7 +902,7 @@ static inline void uclamp_cpu_dec_id(struct task_struct *p, struct rq *rq, */ rq->uclamp[clamp_id].bucket[bucket_id].value = uclamp_maps[clamp_id][bucket_id].value; - uclamp_cpu_update(rq, clamp_id); + uclamp_cpu_update(rq, clamp_id, clamp_value); } } @@ -1100,6 +1144,7 @@ static void __init init_uclamp(void) for_each_possible_cpu(cpu) { memset(&cpu_rq(cpu)->uclamp, 0, sizeof(struct uclamp_cpu)); cpu_rq(cpu)->uclamp[UCLAMP_MAX].value = uclamp_none(UCLAMP_MAX); + cpu_rq(cpu)->uclamp_flags = 0; } memset(uclamp_maps, 0, sizeof(uclamp_maps)); diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index 06ff7d890ff6..b7f3ee8ba164 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -882,6 +882,8 @@ struct rq { #ifdef CONFIG_UCLAMP_TASK /* Utilization clamp values based on CPU's RUNNABLE tasks */ struct uclamp_cpu uclamp[UCLAMP_CNT] ____cacheline_aligned; + unsigned int uclamp_flags; +#define UCLAMP_FLAG_IDLE 0x01 #endif struct cfs_rq cfs; From patchwork Tue Jan 15 10:15:04 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Patrick Bellasi X-Patchwork-Id: 10764221 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 345B61390 for ; Tue, 15 Jan 2019 10:17:04 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by 
mail.wl.linuxfoundation.org (Postfix) with ESMTP id 227552B3AB for ; Tue, 15 Jan 2019 10:17:04 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 168352B89A; Tue, 15 Jan 2019 10:17:04 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3F66D2B3AB for ; Tue, 15 Jan 2019 10:17:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728270AbfAOKQ4 (ORCPT ); Tue, 15 Jan 2019 05:16:56 -0500 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:46900 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728824AbfAOKPv (ORCPT ); Tue, 15 Jan 2019 05:15:51 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 58F521596; Tue, 15 Jan 2019 02:15:50 -0800 (PST) Received: from e110439-lin.cambridge.arm.com (e110439-lin.cambridge.arm.com [10.1.194.43]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 4083D3F70D; Tue, 15 Jan 2019 02:15:47 -0800 (PST) From: Patrick Bellasi To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, linux-api@vger.kernel.org Cc: Ingo Molnar , Peter Zijlstra , Tejun Heo , "Rafael J . Wysocki" , Vincent Guittot , Viresh Kumar , Paul Turner , Quentin Perret , Dietmar Eggemann , Morten Rasmussen , Juri Lelli , Todd Kjos , Joel Fernandes , Steve Muckle , Suren Baghdasaryan Subject: [PATCH v6 07/16] sched/core: uclamp: Add system default clamps Date: Tue, 15 Jan 2019 10:15:04 +0000 Message-Id: <20190115101513.2822-8-patrick.bellasi@arm.com> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20190115101513.2822-1-patrick.bellasi@arm.com> References: <20190115101513.2822-1-patrick.bellasi@arm.com> MIME-Version: 1.0 Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Tasks without a user-defined clamp value are considered not clamped and by default their utilization can have any value in the [0..SCHED_CAPACITY_SCALE] range. Tasks with a user-defined clamp value are allowed to request any value in that range, and we unconditionally enforce the required clamps. However, a "System Management Software" could be interested in limiting the range of clamp values allowed for all tasks. Add a privileged interface to define a system default configuration via: /proc/sys/kernel/sched_uclamp_util_{min,max} which works as an unconditional clamp range restriction for all tasks. If a task-specific value is not compliant with the system default range, it will be forced to the corresponding system default value. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra --- The current restriction could be too aggressive since, for example, if a task has a util_min which is higher than the system default max, it will be forced to the system default min unconditionally. Let's say we have: Task Clamp: min=30, max=40 System Clamps: min=10, max=20 In principle we should set the task's min=20, since the system allows boosts up to 20%. In the current implementation, however, since the task's min exceeds the system max, we just go for task min=10.
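To make that behaviour concrete, here is a minimal, stand-alone sketch of the restriction (a hypothetical user-space helper with made-up names; the in-kernel logic is in uclamp_effective_get() below):

#include <stdio.h>

/*
 * Hypothetical helper mirroring the "unconditional enforcement" described
 * above; not the in-kernel implementation.
 */
static unsigned int effective_clamp(unsigned int task_value,
				    unsigned int sys_min, unsigned int sys_max,
				    unsigned int sys_default)
{
	/* A task value outside the system range falls back to the system default */
	if (task_value < sys_min || task_value > sys_max)
		return sys_default;
	return task_value;
}

int main(void)
{
	/* Task clamp min=30 with system clamps min=10, max=20: forced to 10 */
	printf("effective util_min = %u\n", effective_clamp(30, 10, 20, 10));
	return 0;
}
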
We should probably better restrict util_min to the maximum system default value, but that would make the code more complex since it required to track a cross clamp_id dependency. Let's keep this as a possible future extension whenever we should really see the need for it. Changes in v6: Others: - wholesale s/group/bucket/ - make use of the bit_for() macro --- include/linux/sched.h | 5 ++ include/linux/sched/sysctl.h | 11 +++ kernel/sched/core.c | 137 ++++++++++++++++++++++++++++++++++- kernel/sysctl.c | 16 ++++ 4 files changed, 166 insertions(+), 3 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index 84294925d006..c8f391d1cdc5 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -625,6 +625,11 @@ struct uclamp_se { unsigned int bucket_id : bits_per(UCLAMP_BUCKETS); unsigned int mapped : 1; unsigned int active : 1; + /* Clamp bucket and value actually used by a RUNNABLE task */ + struct { + unsigned int value : bits_per(SCHED_CAPACITY_SCALE); + unsigned int bucket_id : bits_per(UCLAMP_BUCKETS); + } effective; }; #endif /* CONFIG_UCLAMP_TASK */ diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h index a9c32daeb9d8..445fb54eaeff 100644 --- a/include/linux/sched/sysctl.h +++ b/include/linux/sched/sysctl.h @@ -56,6 +56,11 @@ int sched_proc_update_handler(struct ctl_table *table, int write, extern unsigned int sysctl_sched_rt_period; extern int sysctl_sched_rt_runtime; +#ifdef CONFIG_UCLAMP_TASK +extern unsigned int sysctl_sched_uclamp_util_min; +extern unsigned int sysctl_sched_uclamp_util_max; +#endif + #ifdef CONFIG_CFS_BANDWIDTH extern unsigned int sysctl_sched_cfs_bandwidth_slice; #endif @@ -75,6 +80,12 @@ extern int sched_rt_handler(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos); +#ifdef CONFIG_UCLAMP_TASK +extern int sched_uclamp_handler(struct ctl_table *table, int write, + void __user *buffer, size_t *lenp, + loff_t *ppos); +#endif + extern int sysctl_numa_balancing(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos); diff --git a/kernel/sched/core.c b/kernel/sched/core.c index b7ac516a70be..d1ea5825501a 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -731,6 +731,23 @@ static void set_load_weight(struct task_struct *p, bool update_load) static DEFINE_MUTEX(uclamp_mutex); /* + * Minimum utilization for FAIR tasks + * default: 0 + */ +unsigned int sysctl_sched_uclamp_util_min; + +/* + * Maximum utilization for FAIR tasks + * default: 1024 + */ +unsigned int sysctl_sched_uclamp_util_max = SCHED_CAPACITY_SCALE; + +/* + * Tasks specific clamp values are required to be within this range + */ +static struct uclamp_se uclamp_default[UCLAMP_CNT]; + +/** * Reference count utilization clamp buckets * @value: the utilization "clamp value" tracked by this clamp bucket * @se_count: the number of scheduling entities using this "clamp value" @@ -827,6 +844,72 @@ static inline void uclamp_cpu_update(struct rq *rq, unsigned int clamp_id, WRITE_ONCE(rq->uclamp[clamp_id].value, max_value); } +/* + * The effective clamp bucket index of a task depends on, by increasing + * priority: + * - the task specific clamp value, explicitly requested from userspace + * - the system default clamp value, defined by the sysadmin + * + * As a side effect, update the task's effective value: + * task_struct::uclamp::effective::value + * to represent the clamp value of the task effective bucket index. 
+ */ +static inline void +uclamp_effective_get(struct task_struct *p, unsigned int clamp_id, + unsigned int *clamp_value, unsigned int *bucket_id) +{ + /* Task specific clamp value */ + *clamp_value = p->uclamp[clamp_id].value; + *bucket_id = p->uclamp[clamp_id].bucket_id; + + /* System default restriction */ + if (unlikely(*clamp_value < uclamp_default[UCLAMP_MIN].value || + *clamp_value > uclamp_default[UCLAMP_MAX].value)) { + /* Keep it simple: unconditionally enforce system defaults */ + *clamp_value = uclamp_default[clamp_id].value; + *bucket_id = uclamp_default[clamp_id].bucket_id; + } +} + +static inline void +uclamp_effective_assign(struct task_struct *p, unsigned int clamp_id) +{ + unsigned int clamp_value, bucket_id; + + uclamp_effective_get(p, clamp_id, &clamp_value, &bucket_id); + + p->uclamp[clamp_id].effective.value = clamp_value; + p->uclamp[clamp_id].effective.bucket_id = bucket_id; +} + +static inline unsigned int uclamp_effective_bucket_id(struct task_struct *p, + unsigned int clamp_id) +{ + unsigned int clamp_value, bucket_id; + + /* Task currently refcounted: use back-annotate effective value */ + if (p->uclamp[clamp_id].active) + return p->uclamp[clamp_id].effective.bucket_id; + + uclamp_effective_get(p, clamp_id, &clamp_value, &bucket_id); + + return bucket_id; +} + +static unsigned int uclamp_effective_value(struct task_struct *p, + unsigned int clamp_id) +{ + unsigned int clamp_value, bucket_id; + + /* Task currently refcounted: use back-annotate effective value */ + if (p->uclamp[clamp_id].active) + return p->uclamp[clamp_id].effective.value; + + uclamp_effective_get(p, clamp_id, &clamp_value, &bucket_id); + + return clamp_value; +} + /* * When a task is enqueued on a CPU's rq, the clamp bucket currently defined by * the task's uclamp::bucket_id is reference counted on that CPU. 
This also @@ -843,14 +926,15 @@ static inline void uclamp_cpu_inc_id(struct task_struct *p, struct rq *rq, if (unlikely(!p->uclamp[clamp_id].mapped)) return; + uclamp_effective_assign(p, clamp_id); - bucket_id = p->uclamp[clamp_id].bucket_id; + bucket_id = uclamp_effective_bucket_id(p, clamp_id); p->uclamp[clamp_id].active = true; rq->uclamp[clamp_id].bucket[bucket_id].tasks++; /* Reset clamp holds on idle exit */ - tsk_clamp = p->uclamp[clamp_id].value; + tsk_clamp = uclamp_effective_value(p, clamp_id); uclamp_idle_reset(rq, clamp_id, tsk_clamp); /* CPU's clamp buckets track the max effective clamp value */ @@ -880,7 +964,7 @@ static inline void uclamp_cpu_dec_id(struct task_struct *p, struct rq *rq, if (unlikely(!p->uclamp[clamp_id].mapped)) return; - bucket_id = p->uclamp[clamp_id].bucket_id; + bucket_id = uclamp_effective_bucket_id(p, clamp_id); p->uclamp[clamp_id].active = false; SCHED_WARN_ON(!rq->uclamp[clamp_id].bucket[bucket_id].tasks); @@ -1068,6 +1152,50 @@ static void uclamp_bucket_inc(struct task_struct *p, struct uclamp_se *uc_se, uc_se->mapped = true; } +int sched_uclamp_handler(struct ctl_table *table, int write, + void __user *buffer, size_t *lenp, + loff_t *ppos) +{ + int old_min, old_max; + int result = 0; + + mutex_lock(&uclamp_mutex); + + old_min = sysctl_sched_uclamp_util_min; + old_max = sysctl_sched_uclamp_util_max; + + result = proc_dointvec(table, write, buffer, lenp, ppos); + if (result) + goto undo; + if (!write) + goto done; + + if (sysctl_sched_uclamp_util_min > sysctl_sched_uclamp_util_max || + sysctl_sched_uclamp_util_max > SCHED_CAPACITY_SCALE) { + result = -EINVAL; + goto undo; + } + + if (old_min != sysctl_sched_uclamp_util_min) { + uclamp_bucket_inc(NULL, &uclamp_default[UCLAMP_MIN], + UCLAMP_MIN, sysctl_sched_uclamp_util_min); + } + if (old_max != sysctl_sched_uclamp_util_max) { + uclamp_bucket_inc(NULL, &uclamp_default[UCLAMP_MAX], + UCLAMP_MAX, sysctl_sched_uclamp_util_max); + } + goto done; + +undo: + sysctl_sched_uclamp_util_min = old_min; + sysctl_sched_uclamp_util_max = old_max; + +done: + mutex_unlock(&uclamp_mutex); + + return result; +} + static int __setscheduler_uclamp(struct task_struct *p, const struct sched_attr *attr) { @@ -1151,6 +1279,9 @@ static void __init init_uclamp(void) for (clamp_id = 0; clamp_id < UCLAMP_CNT; ++clamp_id) { uc_se = &init_task.uclamp[clamp_id]; uclamp_bucket_inc(NULL, uc_se, clamp_id, uclamp_none(clamp_id)); + + uc_se = &uclamp_default[clamp_id]; + uclamp_bucket_inc(NULL, uc_se, clamp_id, uclamp_none(clamp_id)); } } diff --git a/kernel/sysctl.c b/kernel/sysctl.c index ba4d9e85feb8..b0fa4a883999 100644 --- a/kernel/sysctl.c +++ b/kernel/sysctl.c @@ -446,6 +446,22 @@ static struct ctl_table kern_table[] = { .mode = 0644, .proc_handler = sched_rr_handler, }, +#ifdef CONFIG_UCLAMP_TASK + { + .procname = "sched_uclamp_util_min", + .data = &sysctl_sched_uclamp_util_min, + .maxlen = sizeof(unsigned int), + .mode = 0644, + .proc_handler = sched_uclamp_handler, + }, + { + .procname = "sched_uclamp_util_max", + .data = &sysctl_sched_uclamp_util_max, + .maxlen = sizeof(unsigned int), + .mode = 0644, + .proc_handler = sched_uclamp_handler, + }, +#endif #ifdef CONFIG_SCHED_AUTOGROUP { .procname = "sched_autogroup_enabled", From patchwork Tue Jan 15 10:15:05 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Patrick Bellasi X-Patchwork-Id: 10764201 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by 
pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B6A7D1390 for ; Tue, 15 Jan 2019 10:15:56 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id A39FF2AFB3 for ; Tue, 15 Jan 2019 10:15:56 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 97AA22B175; Tue, 15 Jan 2019 10:15:56 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0AEED2AFB3 for ; Tue, 15 Jan 2019 10:15:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728845AbfAOKPy (ORCPT ); Tue, 15 Jan 2019 05:15:54 -0500 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:46918 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728801AbfAOKPy (ORCPT ); Tue, 15 Jan 2019 05:15:54 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id AE83A15AD; Tue, 15 Jan 2019 02:15:53 -0800 (PST) Received: from e110439-lin.cambridge.arm.com (e110439-lin.cambridge.arm.com [10.1.194.43]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 97DE13F70D; Tue, 15 Jan 2019 02:15:50 -0800 (PST) From: Patrick Bellasi To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, linux-api@vger.kernel.org Cc: Ingo Molnar , Peter Zijlstra , Tejun Heo , "Rafael J . Wysocki" , Vincent Guittot , Viresh Kumar , Paul Turner , Quentin Perret , Dietmar Eggemann , Morten Rasmussen , Juri Lelli , Todd Kjos , Joel Fernandes , Steve Muckle , Suren Baghdasaryan Subject: [PATCH v6 08/16] sched/cpufreq: uclamp: Add utilization clamping for FAIR tasks Date: Tue, 15 Jan 2019 10:15:05 +0000 Message-Id: <20190115101513.2822-9-patrick.bellasi@arm.com> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20190115101513.2822-1-patrick.bellasi@arm.com> References: <20190115101513.2822-1-patrick.bellasi@arm.com> MIME-Version: 1.0 Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Each time a frequency update is required via schedutil, a frequency is selected to (possibly) satisfy the utilization reported by each scheduling class. However, when utilization clamping is in use, the frequency selection should also consider userspace utilization clamping hints. This allows, for example: - boosting tasks which directly affect the user experience, by running them at least at a minimum "requested" frequency - capping low priority tasks which do not directly affect the user experience, by running them only up to a maximum "allowed" frequency These constraints are meant to support per-task tuning of the frequency selection, thus supporting a fine-grained definition of performance boosting vs energy saving strategies in kernel space. Add support to clamp the utilization and IOWait boost of RUNNABLE FAIR tasks within the boundaries defined by their aggregated utilization clamp constraints. Based on the max(min_util, max_util) of each task, the CPU clamp value is max-aggregated so as to give boosted tasks the performance they need when they happen to be co-scheduled with other capped tasks.
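As a rough illustration of that aggregation (a stand-alone sketch with assumed values; the actual implementation is the uclamp_util() helper added below):

#include <stdio.h>

/* Bound a CFS utilization value within the CPU's aggregated min/max clamps */
static unsigned int clamp_util(unsigned int util,
			       unsigned int cpu_min, unsigned int cpu_max)
{
	if (util < cpu_min)
		return cpu_min;	/* boosted: request at least this much capacity */
	if (util > cpu_max)
		return cpu_max;	/* capped: never request more than this */
	return util;
}

int main(void)
{
	/* Assume the CPU's max-aggregated clamps are min=512 and max=768 */
	printf("util 100 -> %u\n", clamp_util(100, 512, 768));
	printf("util 900 -> %u\n", clamp_util(900, 512, 768));
	return 0;
}
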
Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Rafael J. Wysocki --- Changes in v6: Message-ID: <20181107113849.GC14309@e110439-lin> - sanity check util_max >= util_min Others: - wholesale s/group/bucket/ - wholesale s/_{get,put}/_{inc,dec}/ to match refcount APIs --- kernel/sched/cpufreq_schedutil.c | 27 ++++++++++++++++++++++++--- kernel/sched/sched.h | 23 +++++++++++++++++++++++ 2 files changed, 47 insertions(+), 3 deletions(-) diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c index 033ec7c45f13..520ee2b785e7 100644 --- a/kernel/sched/cpufreq_schedutil.c +++ b/kernel/sched/cpufreq_schedutil.c @@ -218,8 +218,15 @@ unsigned long schedutil_freq_util(int cpu, unsigned long util_cfs, * CFS tasks and we use the same metric to track the effective * utilization (PELT windows are synchronized) we can directly add them * to obtain the CPU's actual utilization. + * + * CFS utilization can be boosted or capped, depending on utilization + * clamp constraints requested by currently RUNNABLE tasks. + * When there are no CFS RUNNABLE tasks, clamps are released and + * frequency will be gracefully reduced with the utilization decay. */ - util = util_cfs; + util = (type == ENERGY_UTIL) + ? util_cfs + : uclamp_util(rq, util_cfs); util += cpu_util_rt(rq); dl_util = cpu_util_dl(rq); @@ -327,6 +334,7 @@ static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, u64 time, unsigned int flags) { bool set_iowait_boost = flags & SCHED_CPUFREQ_IOWAIT; + unsigned int max_boost; /* Reset boost if the CPU appears to have been idle enough */ if (sg_cpu->iowait_boost && @@ -342,11 +350,24 @@ static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, u64 time, return; sg_cpu->iowait_boost_pending = true; + /* + * Boost FAIR tasks only up to the CPU clamped utilization. + * + * Since DL tasks have a much more advanced bandwidth control, it's + * safe to assume that IO boost does not apply to those tasks. + * Instead, since RT tasks are not utilization clamped, we don't want + * to apply clamping on IO boost while there is blocked RT + * utilization. + */ + max_boost = sg_cpu->iowait_boost_max; + if (!cpu_util_rt(cpu_rq(sg_cpu->cpu))) + max_boost = uclamp_util(cpu_rq(sg_cpu->cpu), max_boost); + /* Double the boost at each request */ if (sg_cpu->iowait_boost) { sg_cpu->iowait_boost <<= 1; - if (sg_cpu->iowait_boost > sg_cpu->iowait_boost_max) - sg_cpu->iowait_boost = sg_cpu->iowait_boost_max; + if (sg_cpu->iowait_boost > max_boost) + sg_cpu->iowait_boost = max_boost; return; } diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index b7f3ee8ba164..95d62a2a0b44 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -2267,6 +2267,29 @@ static inline unsigned int uclamp_none(int clamp_id) return SCHED_CAPACITY_SCALE; } +#ifdef CONFIG_UCLAMP_TASK +static inline unsigned int uclamp_util(struct rq *rq, unsigned int util) +{ + unsigned int min_util = READ_ONCE(rq->uclamp[UCLAMP_MIN].value); + unsigned int max_util = READ_ONCE(rq->uclamp[UCLAMP_MAX].value); + + /* + * Since CPU's {min,max}_util clamps are MAX aggregated considering + * RUNNABLE tasks with _different_ clamps, we can end up with an + * invertion, which we can fix at usage time. 
+ */ + if (unlikely(min_util >= max_util)) + return min_util; + + return clamp(util, min_util, max_util); +} +#else /* CONFIG_UCLAMP_TASK */ +static inline unsigned int uclamp_util(struct rq *rq, unsigned int util) +{ + return util; +} +#endif /* CONFIG_UCLAMP_TASK */ + #ifdef arch_scale_freq_capacity # ifndef arch_scale_freq_invariant # define arch_scale_freq_invariant() true From patchwork Tue Jan 15 10:15:06 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Patrick Bellasi X-Patchwork-Id: 10764217 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 51A4314E5 for ; Tue, 15 Jan 2019 10:16:54 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 40F612B3AB for ; Tue, 15 Jan 2019 10:16:54 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 354652B89A; Tue, 15 Jan 2019 10:16:54 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 911342B3AB for ; Tue, 15 Jan 2019 10:16:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728861AbfAOKP6 (ORCPT ); Tue, 15 Jan 2019 05:15:58 -0500 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:46940 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728801AbfAOKP5 (ORCPT ); Tue, 15 Jan 2019 05:15:57 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 10C401596; Tue, 15 Jan 2019 02:15:57 -0800 (PST) Received: from e110439-lin.cambridge.arm.com (e110439-lin.cambridge.arm.com [10.1.194.43]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id EE3B73F70D; Tue, 15 Jan 2019 02:15:53 -0800 (PST) From: Patrick Bellasi To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, linux-api@vger.kernel.org Cc: Ingo Molnar , Peter Zijlstra , Tejun Heo , "Rafael J . Wysocki" , Vincent Guittot , Viresh Kumar , Paul Turner , Quentin Perret , Dietmar Eggemann , Morten Rasmussen , Juri Lelli , Todd Kjos , Joel Fernandes , Steve Muckle , Suren Baghdasaryan Subject: [PATCH v6 09/16] sched/cpufreq: uclamp: Add utilization clamping for RT tasks Date: Tue, 15 Jan 2019 10:15:06 +0000 Message-Id: <20190115101513.2822-10-patrick.bellasi@arm.com> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20190115101513.2822-1-patrick.bellasi@arm.com> References: <20190115101513.2822-1-patrick.bellasi@arm.com> MIME-Version: 1.0 Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Schedutil enforces a maximum frequency when RT tasks are RUNNABLE. This mandatory policy can be made tunable from userspace to define a max frequency which is still reasonable for the execution of a specific RT workload while being also power/energy friendly. Extend the usage of util_{min,max} to the RT scheduling class. Add uclamp_default_perf, a special set of clamp values to be used for tasks requiring maximum performance, i.e. 
by default all the non clamped RT tasks. Since utilization clamping applies now to both CFS and RT tasks, schedutil clamps the combined utilization of these two classes. The IOWait boost value is also subject to clamping for RT tasks. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Rafael J. Wysocki --- Changes in v6: Others: - wholesale s/group/bucket/ - wholesale s/_{get,put}/_{inc,dec}/ to match refcount APIs --- kernel/sched/core.c | 20 ++++++++++++++++---- kernel/sched/cpufreq_schedutil.c | 27 +++++++++++++-------------- kernel/sched/rt.c | 4 ++++ 3 files changed, 33 insertions(+), 18 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index d1ea5825501a..1ed01f381641 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -746,6 +746,7 @@ unsigned int sysctl_sched_uclamp_util_max = SCHED_CAPACITY_SCALE; * Tasks specific clamp values are required to be within this range */ static struct uclamp_se uclamp_default[UCLAMP_CNT]; +static struct uclamp_se uclamp_default_perf[UCLAMP_CNT]; /** * Reference count utilization clamp buckets @@ -858,16 +859,23 @@ static inline void uclamp_effective_get(struct task_struct *p, unsigned int clamp_id, unsigned int *clamp_value, unsigned int *bucket_id) { + struct uclamp_se *default_clamp; + /* Task specific clamp value */ *clamp_value = p->uclamp[clamp_id].value; *bucket_id = p->uclamp[clamp_id].bucket_id; + /* RT tasks have different default values */ + default_clamp = task_has_rt_policy(p) + ? uclamp_default_perf + : uclamp_default; + /* System default restriction */ - if (unlikely(*clamp_value < uclamp_default[UCLAMP_MIN].value || - *clamp_value > uclamp_default[UCLAMP_MAX].value)) { + if (unlikely(*clamp_value < default_clamp[UCLAMP_MIN].value || + *clamp_value > default_clamp[UCLAMP_MAX].value)) { /* Keep it simple: unconditionally enforce system defaults */ - *clamp_value = uclamp_default[clamp_id].value; - *bucket_id = uclamp_default[clamp_id].bucket_id; + *clamp_value = default_clamp[clamp_id].value; + *bucket_id = default_clamp[clamp_id].bucket_id; } } @@ -1282,6 +1290,10 @@ static void __init init_uclamp(void) uc_se = &uclamp_default[clamp_id]; uclamp_bucket_inc(NULL, uc_se, clamp_id, uclamp_none(clamp_id)); + + /* RT tasks by default will go to max frequency */ + uc_se = &uclamp_default_perf[clamp_id]; + uclamp_bucket_inc(NULL, uc_se, clamp_id, uclamp_none(UCLAMP_MAX)); } } diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c index 520ee2b785e7..38a05a4f78cc 100644 --- a/kernel/sched/cpufreq_schedutil.c +++ b/kernel/sched/cpufreq_schedutil.c @@ -201,9 +201,6 @@ unsigned long schedutil_freq_util(int cpu, unsigned long util_cfs, unsigned long dl_util, util, irq; struct rq *rq = cpu_rq(cpu); - if (type == FREQUENCY_UTIL && rt_rq_is_runnable(&rq->rt)) - return max; - /* * Early check to see if IRQ/steal time saturates the CPU, can be * because of inaccuracies in how we track these -- see @@ -219,15 +216,19 @@ unsigned long schedutil_freq_util(int cpu, unsigned long util_cfs, * utilization (PELT windows are synchronized) we can directly add them * to obtain the CPU's actual utilization. * - * CFS utilization can be boosted or capped, depending on utilization - * clamp constraints requested by currently RUNNABLE tasks. + * CFS and RT utilization can be boosted or capped, depending on + * utilization clamp constraints requested by currently RUNNABLE + * tasks. 
* When there are no CFS RUNNABLE tasks, clamps are released and * frequency will be gracefully reduced with the utilization decay. */ - util = (type == ENERGY_UTIL) - ? util_cfs - : uclamp_util(rq, util_cfs); - util += cpu_util_rt(rq); + util = cpu_util_rt(rq); + if (type == FREQUENCY_UTIL) { + util += cpu_util_cfs(rq); + util = uclamp_util(rq, util); + } else { + util += util_cfs; + } dl_util = cpu_util_dl(rq); @@ -355,13 +356,11 @@ static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, u64 time, * * Since DL tasks have a much more advanced bandwidth control, it's * safe to assume that IO boost does not apply to those tasks. - * Instead, since RT tasks are not utilization clamped, we don't want - * to apply clamping on IO boost while there is blocked RT - * utilization. + * Instead, for CFS and RT tasks we clamp the IO boost max value + * considering the current constraints for the CPU. */ max_boost = sg_cpu->iowait_boost_max; - if (!cpu_util_rt(cpu_rq(sg_cpu->cpu))) - max_boost = uclamp_util(cpu_rq(sg_cpu->cpu), max_boost); + max_boost = uclamp_util(cpu_rq(sg_cpu->cpu), max_boost); /* Double the boost at each request */ if (sg_cpu->iowait_boost) { diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c index e4f398ad9e73..614b0bc359cb 100644 --- a/kernel/sched/rt.c +++ b/kernel/sched/rt.c @@ -2400,6 +2400,10 @@ const struct sched_class rt_sched_class = { .switched_to = switched_to_rt, .update_curr = update_curr_rt, + +#ifdef CONFIG_UCLAMP_TASK + .uclamp_enabled = 1, +#endif }; #ifdef CONFIG_RT_GROUP_SCHED From patchwork Tue Jan 15 10:15:07 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Patrick Bellasi X-Patchwork-Id: 10764215 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4D4D51390 for ; Tue, 15 Jan 2019 10:16:50 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3D4B92B80C for ; Tue, 15 Jan 2019 10:16:50 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 311D42B810; Tue, 15 Jan 2019 10:16:50 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id BDE662BA68 for ; Tue, 15 Jan 2019 10:16:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728471AbfAOKQB (ORCPT ); Tue, 15 Jan 2019 05:16:01 -0500 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:46956 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728878AbfAOKQB (ORCPT ); Tue, 15 Jan 2019 05:16:01 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 6730F1596; Tue, 15 Jan 2019 02:16:00 -0800 (PST) Received: from e110439-lin.cambridge.arm.com (e110439-lin.cambridge.arm.com [10.1.194.43]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 507C23F70D; Tue, 15 Jan 2019 02:15:57 -0800 (PST) From: Patrick Bellasi To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, linux-api@vger.kernel.org Cc: Ingo Molnar , Peter Zijlstra , Tejun Heo , "Rafael J . 
Wysocki" , Vincent Guittot , Viresh Kumar , Paul Turner , Quentin Perret , Dietmar Eggemann , Morten Rasmussen , Juri Lelli , Todd Kjos , Joel Fernandes , Steve Muckle , Suren Baghdasaryan Subject: [PATCH v6 10/16] sched/core: Add uclamp_util_with() Date: Tue, 15 Jan 2019 10:15:07 +0000 Message-Id: <20190115101513.2822-11-patrick.bellasi@arm.com> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20190115101513.2822-1-patrick.bellasi@arm.com> References: <20190115101513.2822-1-patrick.bellasi@arm.com> MIME-Version: 1.0 Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Currently uclamp_util() allows to clamp a specified utilization considering the clamp values requested by RUNNABLE tasks in a CPU. Sometimes however, it could be interesting to verify how clamp values will change when a task is going to be running on a given CPU. For example, the Energy Aware Scheduler (EAS) is interested in evaluating and comparing the energy impact of different scheduling decisions. Add uclamp_util_with() which allows to clamp a given utilization by considering the possible impact on CPU clamp values of a specified task. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra --- kernel/sched/core.c | 4 ++-- kernel/sched/sched.h | 21 ++++++++++++++++++++- 2 files changed, 22 insertions(+), 3 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 1ed01f381641..b41db1190d28 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -904,8 +904,8 @@ static inline unsigned int uclamp_effective_bucket_id(struct task_struct *p, return bucket_id; } -static unsigned int uclamp_effective_value(struct task_struct *p, - unsigned int clamp_id) +unsigned int uclamp_effective_value(struct task_struct *p, + unsigned int clamp_id) { unsigned int clamp_value, bucket_id; diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index 95d62a2a0b44..b7ce3023d023 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -2268,11 +2268,20 @@ static inline unsigned int uclamp_none(int clamp_id) } #ifdef CONFIG_UCLAMP_TASK -static inline unsigned int uclamp_util(struct rq *rq, unsigned int util) +unsigned int uclamp_effective_value(struct task_struct *p, unsigned int clamp_id); + +static __always_inline +unsigned int uclamp_util_with(struct rq *rq, unsigned int util, + struct task_struct *p) { unsigned int min_util = READ_ONCE(rq->uclamp[UCLAMP_MIN].value); unsigned int max_util = READ_ONCE(rq->uclamp[UCLAMP_MAX].value); + if (p) { + min_util = max(min_util, uclamp_effective_value(p, UCLAMP_MIN)); + max_util = max(max_util, uclamp_effective_value(p, UCLAMP_MAX)); + } + /* * Since CPU's {min,max}_util clamps are MAX aggregated considering * RUNNABLE tasks with _different_ clamps, we can end up with an @@ -2283,7 +2292,17 @@ static inline unsigned int uclamp_util(struct rq *rq, unsigned int util) return clamp(util, min_util, max_util); } + +static inline unsigned int uclamp_util(struct rq *rq, unsigned int util) +{ + return uclamp_util_with(rq, util, NULL); +} #else /* CONFIG_UCLAMP_TASK */ +static inline unsigned int uclamp_util_with(struct rq *rq, unsigned int util, + struct task_struct *p) +{ + return util; +} static inline unsigned int uclamp_util(struct rq *rq, unsigned int util) { return util; From patchwork Tue Jan 15 10:15:08 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Patrick Bellasi X-Patchwork-Id: 10764213 Return-Path: 
Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3084314E5 for ; Tue, 15 Jan 2019 10:16:49 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1E1A82B3AB for ; Tue, 15 Jan 2019 10:16:49 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 0ECBD2B89A; Tue, 15 Jan 2019 10:16:49 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id DDB532B3AB for ; Tue, 15 Jan 2019 10:16:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728889AbfAOKQE (ORCPT ); Tue, 15 Jan 2019 05:16:04 -0500 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:46970 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728884AbfAOKQE (ORCPT ); Tue, 15 Jan 2019 05:16:04 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id BDAAF1596; Tue, 15 Jan 2019 02:16:03 -0800 (PST) Received: from e110439-lin.cambridge.arm.com (e110439-lin.cambridge.arm.com [10.1.194.43]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id A71663F70D; Tue, 15 Jan 2019 02:16:00 -0800 (PST) From: Patrick Bellasi To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, linux-api@vger.kernel.org Cc: Ingo Molnar , Peter Zijlstra , Tejun Heo , "Rafael J . Wysocki" , Vincent Guittot , Viresh Kumar , Paul Turner , Quentin Perret , Dietmar Eggemann , Morten Rasmussen , Juri Lelli , Todd Kjos , Joel Fernandes , Steve Muckle , Suren Baghdasaryan Subject: [PATCH v6 11/16] sched/fair: Add uclamp support to energy_compute() Date: Tue, 15 Jan 2019 10:15:08 +0000 Message-Id: <20190115101513.2822-12-patrick.bellasi@arm.com> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20190115101513.2822-1-patrick.bellasi@arm.com> References: <20190115101513.2822-1-patrick.bellasi@arm.com> MIME-Version: 1.0 Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP The Energy Aware Scheduler (EAS) estimates the energy impact of waking up a task on a given CPU. This estimation is based on: a) the (active) power consumption defined for each CPU frequency b) an estimation of which frequency will be used on each CPU c) an estimation of the busy time (utilization) of each CPU Utilization clamping can affect both b) and c) estimations. A CPU is expected to run: - at a higher than required frequency, but for a shorter time, in case its estimated utilization is smaller than the minimum utilization enforced by uclamp - at a lower than required frequency, but for a longer time, in case its estimated utilization is bigger than the maximum utilization enforced by uclamp While effects on busy time for both boosted/capped tasks are already considered by compute_energy(), clamping effects on frequency selection are currently ignored by that function. Fix it by considering how CPU clamp values will be affected by a task waking up and being RUNNABLE on that CPU.
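As a rough, self-contained illustration of that trade-off (a toy model with assumed numbers; not the estimator used by the kernel):

#include <stdio.h>

/*
 * Toy model: the frequency is sized for the clamped utilization, so the
 * busy time scales roughly by util / clamped w.r.t. the unclamped case.
 */
static void estimate(unsigned int util, unsigned int clamped)
{
	printf("util=%u clamped=%u -> busy time scaled by ~%.2fx\n",
	       util, clamped, (double)util / clamped);
}

int main(void)
{
	estimate(600, 400);	/* capped: lower frequency, longer busy time */
	estimate(200, 400);	/* boosted: higher frequency, shorter busy time */
	return 0;
}
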
Do that by refactoring schedutil_freq_util() to take an additional task_struct* which allows EAS to evaluate the impact on clamp values of a task being eventually queued in a CPU. Clamp values are applied to the RT+CFS utilization only when a FREQUENCY_UTIL is required by compute_energy(). Since we are at that: - rename schedutil_freq_util() into schedutil_cpu_util(), since it's not only used for frequency selection. - use "unsigned int" instead of "unsigned long" whenever the tracked utilization value is not expected to overflow 32bit. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Rafael J. Wysocki --- kernel/sched/cpufreq_schedutil.c | 26 +++++++++++----------- kernel/sched/fair.c | 37 ++++++++++++++++++++++++++------ kernel/sched/sched.h | 19 +++++----------- 3 files changed, 48 insertions(+), 34 deletions(-) diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c index 38a05a4f78cc..4e02b419c482 100644 --- a/kernel/sched/cpufreq_schedutil.c +++ b/kernel/sched/cpufreq_schedutil.c @@ -195,10 +195,11 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy, * based on the task model parameters and gives the minimal utilization * required to meet deadlines. */ -unsigned long schedutil_freq_util(int cpu, unsigned long util_cfs, - unsigned long max, enum schedutil_type type) +unsigned int schedutil_cpu_util(int cpu, unsigned int util_cfs, + unsigned int max, enum schedutil_type type, + struct task_struct *p) { - unsigned long dl_util, util, irq; + unsigned int dl_util, util, irq; struct rq *rq = cpu_rq(cpu); /* @@ -222,13 +223,9 @@ unsigned long schedutil_freq_util(int cpu, unsigned long util_cfs, * When there are no CFS RUNNABLE tasks, clamps are released and * frequency will be gracefully reduced with the utilization decay. */ - util = cpu_util_rt(rq); - if (type == FREQUENCY_UTIL) { - util += cpu_util_cfs(rq); - util = uclamp_util(rq, util); - } else { - util += util_cfs; - } + util = cpu_util_rt(rq) + util_cfs; + if (type == FREQUENCY_UTIL) + util = uclamp_util_with(rq, util, p); dl_util = cpu_util_dl(rq); @@ -282,13 +279,14 @@ unsigned long schedutil_freq_util(int cpu, unsigned long util_cfs, static unsigned long sugov_get_util(struct sugov_cpu *sg_cpu) { struct rq *rq = cpu_rq(sg_cpu->cpu); - unsigned long util = cpu_util_cfs(rq); - unsigned long max = arch_scale_cpu_capacity(NULL, sg_cpu->cpu); + unsigned int util_cfs = cpu_util_cfs(rq); + unsigned int cpu_cap = arch_scale_cpu_capacity(NULL, sg_cpu->cpu); - sg_cpu->max = max; + sg_cpu->max = cpu_cap; sg_cpu->bw_dl = cpu_bw_dl(rq); - return schedutil_freq_util(sg_cpu->cpu, util, max, FREQUENCY_UTIL); + return schedutil_cpu_util(sg_cpu->cpu, util_cfs, cpu_cap, + FREQUENCY_UTIL, NULL); } /** diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 5de061b055d2..7f8ca3b02dec 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -6424,11 +6424,20 @@ static unsigned long cpu_util_next(int cpu, struct task_struct *p, int dst_cpu) static long compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd) { - long util, max_util, sum_util, energy = 0; + unsigned int max_util, cfs_util, cpu_util, cpu_cap; + unsigned long sum_util, energy = 0; int cpu; for (; pd; pd = pd->next) { + struct cpumask *pd_mask = perf_domain_span(pd); + + /* + * The energy model mandate all the CPUs of a performance + * domain have the same capacity. 
+ */ + cpu_cap = arch_scale_cpu_capacity(NULL, cpumask_first(pd_mask)); max_util = sum_util = 0; + /* * The capacity state of CPUs of the current rd can be driven by * CPUs of another rd if they belong to the same performance @@ -6439,11 +6448,27 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd) * it will not appear in its pd list and will not be accounted * by compute_energy(). */ - for_each_cpu_and(cpu, perf_domain_span(pd), cpu_online_mask) { - util = cpu_util_next(cpu, p, dst_cpu); - util = schedutil_energy_util(cpu, util); - max_util = max(util, max_util); - sum_util += util; + for_each_cpu_and(cpu, pd_mask, cpu_online_mask) { + cfs_util = cpu_util_next(cpu, p, dst_cpu); + + /* + * Busy time computation: utilization clamping is not + * required since the ratio (sum_util / cpu_capacity) + * is already enough to scale the EM reported power + * consumption at the (eventually clamped) cpu_capacity. + */ + sum_util += schedutil_cpu_util(cpu, cfs_util, cpu_cap, + ENERGY_UTIL, NULL); + + /* + * Performance domain frequency: utilization clamping + * must be considered since it affects the selection + * of the performance domain frequency. + */ + cpu_util = schedutil_cpu_util(cpu, cfs_util, cpu_cap, + FREQUENCY_UTIL, + cpu == dst_cpu ? p : NULL); + max_util = max(max_util, cpu_util); } energy += em_pd_energy(pd->em_pd, max_util, sum_util); diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index b7ce3023d023..a70f4bf66285 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -2317,7 +2317,6 @@ static inline unsigned int uclamp_util(struct rq *rq, unsigned int util) # define arch_scale_freq_invariant() false #endif -#ifdef CONFIG_CPU_FREQ_GOV_SCHEDUTIL /** * enum schedutil_type - CPU utilization type * @FREQUENCY_UTIL: Utilization used to select frequency @@ -2333,15 +2332,10 @@ enum schedutil_type { ENERGY_UTIL, }; -unsigned long schedutil_freq_util(int cpu, unsigned long util_cfs, - unsigned long max, enum schedutil_type type); - -static inline unsigned long schedutil_energy_util(int cpu, unsigned long cfs) -{ - unsigned long max = arch_scale_cpu_capacity(NULL, cpu); - - return schedutil_freq_util(cpu, cfs, max, ENERGY_UTIL); -} +#ifdef CONFIG_CPU_FREQ_GOV_SCHEDUTIL +unsigned int schedutil_cpu_util(int cpu, unsigned int util_cfs, + unsigned int max, enum schedutil_type type, + struct task_struct *p); static inline unsigned long cpu_bw_dl(struct rq *rq) { @@ -2370,10 +2364,7 @@ static inline unsigned long cpu_util_rt(struct rq *rq) return READ_ONCE(rq->avg_rt.util_avg); } #else /* CONFIG_CPU_FREQ_GOV_SCHEDUTIL */ -static inline unsigned long schedutil_energy_util(int cpu, unsigned long cfs) -{ - return cfs; -} +#define schedutil_cpu_util(cpu, util_cfs, max, type, p) 0 #endif #ifdef CONFIG_HAVE_SCHED_AVG_IRQ From patchwork Tue Jan 15 10:15:09 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Patrick Bellasi X-Patchwork-Id: 10764211 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A8D291390 for ; Tue, 15 Jan 2019 10:16:38 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 96AED2AFB3 for ; Tue, 15 Jan 2019 10:16:38 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 8984A2B3AB; Tue, 15 Jan 2019 10:16:38 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 
3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 89F382AFB3 for ; Tue, 15 Jan 2019 10:16:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728629AbfAOKQg (ORCPT ); Tue, 15 Jan 2019 05:16:36 -0500 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:46990 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728911AbfAOKQH (ORCPT ); Tue, 15 Jan 2019 05:16:07 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2085D15AD; Tue, 15 Jan 2019 02:16:07 -0800 (PST) Received: from e110439-lin.cambridge.arm.com (e110439-lin.cambridge.arm.com [10.1.194.43]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 09EA23F7B8; Tue, 15 Jan 2019 02:16:03 -0800 (PST) From: Patrick Bellasi To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, linux-api@vger.kernel.org Cc: Ingo Molnar , Peter Zijlstra , Tejun Heo , "Rafael J . Wysocki" , Vincent Guittot , Viresh Kumar , Paul Turner , Quentin Perret , Dietmar Eggemann , Morten Rasmussen , Juri Lelli , Todd Kjos , Joel Fernandes , Steve Muckle , Suren Baghdasaryan Subject: [PATCH v6 12/16] sched/core: uclamp: Extend CPU's cgroup controller Date: Tue, 15 Jan 2019 10:15:09 +0000 Message-Id: <20190115101513.2822-13-patrick.bellasi@arm.com> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20190115101513.2822-1-patrick.bellasi@arm.com> References: <20190115101513.2822-1-patrick.bellasi@arm.com> MIME-Version: 1.0 Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP The cgroup CPU bandwidth controller allows to assign a specified (maximum) bandwidth to the tasks of a group. However this bandwidth is defined and enforced only on a temporal base, without considering the actual frequency a CPU is running on. Thus, the amount of computation completed by a task within an allocated bandwidth can be very different depending on the actual frequency the CPU is running that task. The amount of computation can be affected also by the specific CPU a task is running on, especially when running on asymmetric capacity systems like Arm's big.LITTLE. With the availability of schedutil, the scheduler is now able to drive frequency selections based on actual task utilization. Moreover, the utilization clamping support provides a mechanism to bias the frequency selection operated by schedutil depending on constraints assigned to the tasks currently RUNNABLE on a CPU. Giving the mechanisms described above, it is now possible to extend the cpu controller to specify the minimum (or maximum) utilization which should be considered for tasks RUNNABLE on a cpu. This makes it possible to better defined the actual computational power assigned to task groups, thus improving the cgroup CPU bandwidth controller which is currently based just on time constraints. Extend the CPU controller with a couple of new attributes util.{min,max} which allows to enforce utilization boosting and capping for all the tasks in a group. Specifically: - util.min: defines the minimum utilization which should be considered i.e. 
the RUNNABLE tasks of this group will run at least at a minimum frequency which corresponds to the min_util utilization - util.max: defines the maximum utilization which should be considered i.e. the RUNNABLE tasks of this group will run up to a maximum frequency which corresponds to the max_util utilization These attributes: a) are available only for non-root nodes, both on default and legacy hierarchies, while system wide clamps are defined by a generic interface which does not depends on cgroups b) do not enforce any constraints and/or dependencies between the parent and its child nodes, thus relying: - on permission settings defined by the system management software, to define if subgroups can configure their clamp values - on the delegation model, to ensure that effective clamps are updated to consider both subgroup requests and parent group constraints c) have higher priority than task-specific clamps, defined via sched_setattr(), thus allowing to control and restrict task requests This patch provides the basic support to expose the two new attributes and to validate their run-time updates, while we do not (yet) actually allocated clamp buckets. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Tejun Heo --- NOTEs: 1) The delegation model described above is provided in one of the following patches of this series. 2) Utilization clamping constraints are useful not only to bias frequency selection, when a task is running, but also to better support certain scheduler decisions regarding task placement. For example, on asymmetric capacity systems, a utilization clamp value can be conveniently used to enforce important interactive tasks on more capable CPUs or to run low priority and background tasks on more energy efficient CPUs. The ultimate goal of utilization clamping is thus to enable: - boosting: by selecting an higher capacity CPU and/or higher execution frequency for small tasks which are affecting the user interactive experience. - capping: by selecting more energy efficiency CPUs or lower execution frequency, for big tasks which are mainly related to background activities, and thus without a direct impact on the user experience. Thus, a proper extension of the cpu controller with utilization clamping support will make this controller even more suitable for integration with advanced system management software (e.g. Android). Indeed, an informed user-space can provide rich information hints to the scheduler regarding the tasks it's going to schedule. The bits related to task placement biasing are left for a further extension once the basic support introduced by this series will be merged. Anyway they will not affect the integration with cgroups. Changes in v6: Others: - wholesale s/group/bucket/ - wholesale s/_{get,put}/_{inc,dec}/ to match refcount APIs --- Documentation/admin-guide/cgroup-v2.rst | 25 +++++ init/Kconfig | 22 ++++ kernel/sched/core.c | 131 ++++++++++++++++++++++++ kernel/sched/sched.h | 5 + 4 files changed, 183 insertions(+) diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst index 7bf3f129c68b..a059aaf7cce6 100644 --- a/Documentation/admin-guide/cgroup-v2.rst +++ b/Documentation/admin-guide/cgroup-v2.rst @@ -909,6 +909,12 @@ controller implements weight and absolute bandwidth limit models for normal scheduling policy and absolute bandwidth allocation model for realtime scheduling policy. 
+Cycles distribution is based, by default, on a temporal base and it +does not account for the frequency at which tasks are executed. +The (optional) utilization clamping support allows to enforce a minimum +bandwidth, which should always be provided by a CPU, and a maximum bandwidth, +which should never be exceeded by a CPU. + WARNING: cgroup2 doesn't yet support control of realtime processes and the cpu controller can only be enabled when all RT processes are in the root cgroup. Be aware that system management software may already @@ -974,6 +980,25 @@ All time durations are in microseconds. Shows pressure stall information for CPU. See Documentation/accounting/psi.txt for details. + cpu.util.min + A read-write single value file which exists on non-root cgroups. + The default is "0", i.e. no utilization boosting. + + The minimum utilization in the range [0, 1024]. + + This interface allows reading and setting minimum utilization clamp + values similar to the sched_setattr(2). This minimum utilization + value is used to clamp the task specific minimum utilization clamp. + + cpu.util.max + A read-write single value file which exists on non-root cgroups. + The default is "1024". i.e. no utilization capping + + The maximum utilization in the range [0, 1024]. + + This interface allows reading and setting maximum utilization clamp + values similar to the sched_setattr(2). This maximum utilization + value is used to clamp the task specific maximum utilization clamp. Memory ------ diff --git a/init/Kconfig b/init/Kconfig index e60950ec01c0..94abf368bd52 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -866,6 +866,28 @@ config RT_GROUP_SCHED endif #CGROUP_SCHED +config UCLAMP_TASK_GROUP + bool "Utilization clamping per group of tasks" + depends on CGROUP_SCHED + depends on UCLAMP_TASK + default n + help + This feature enables the scheduler to track the clamped utilization + of each CPU based on RUNNABLE tasks currently scheduled on that CPU. + + When this option is enabled, the user can specify a min and max + CPU bandwidth which is allowed for each single task in a group. + The max bandwidth allows to clamp the maximum frequency a task + can use, while the min bandwidth allows to define a minimum + frequency a task will always use. + + When task group based utilization clamping is enabled, an eventually + specified task-specific clamp value is constrained by the cgroup + specified clamp value. Both minimum and maximum task clamping cannot + be bigger than the corresponding clamping defined at task group level. + + If in doubt, say N. 
+ config CGROUP_PIDS bool "PIDs controller" help diff --git a/kernel/sched/core.c b/kernel/sched/core.c index b41db1190d28..29ae83fb9786 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -1294,6 +1294,13 @@ static void __init init_uclamp(void) /* RT tasks by default will go to max frequency */ uc_se = &uclamp_default_perf[clamp_id]; uclamp_bucket_inc(NULL, uc_se, clamp_id, uclamp_none(UCLAMP_MAX)); + +#ifdef CONFIG_UCLAMP_TASK_GROUP + /* Init root TG's clamp bucket */ + uc_se = &root_task_group.uclamp[clamp_id]; + uc_se->value = uclamp_none(clamp_id); + uc_se->bucket_id = 0; +#endif } } @@ -6872,6 +6879,23 @@ void ia64_set_curr_task(int cpu, struct task_struct *p) /* task_group_lock serializes the addition/removal of task groups */ static DEFINE_SPINLOCK(task_group_lock); +static inline int alloc_uclamp_sched_group(struct task_group *tg, + struct task_group *parent) +{ +#ifdef CONFIG_UCLAMP_TASK_GROUP + int clamp_id; + + for (clamp_id = 0; clamp_id < UCLAMP_CNT; ++clamp_id) { + tg->uclamp[clamp_id].value = + parent->uclamp[clamp_id].value; + tg->uclamp[clamp_id].bucket_id = + parent->uclamp[clamp_id].bucket_id; + } +#endif + + return 1; +} + static void sched_free_group(struct task_group *tg) { free_fair_sched_group(tg); @@ -6895,6 +6919,9 @@ struct task_group *sched_create_group(struct task_group *parent) if (!alloc_rt_sched_group(tg, parent)) goto err; + if (!alloc_uclamp_sched_group(tg, parent)) + goto err; + return tg; err: @@ -7115,6 +7142,84 @@ static void cpu_cgroup_attach(struct cgroup_taskset *tset) sched_move_task(task); } +#ifdef CONFIG_UCLAMP_TASK_GROUP +static int cpu_util_min_write_u64(struct cgroup_subsys_state *css, + struct cftype *cftype, u64 min_value) +{ + struct task_group *tg; + int ret = 0; + + if (min_value > SCHED_CAPACITY_SCALE) + return -ERANGE; + + rcu_read_lock(); + + tg = css_tg(css); + if (tg->uclamp[UCLAMP_MIN].value == min_value) + goto out; + if (tg->uclamp[UCLAMP_MAX].value < min_value) { + ret = -EINVAL; + goto out; + } + +out: + rcu_read_unlock(); + + return ret; +} + +static int cpu_util_max_write_u64(struct cgroup_subsys_state *css, + struct cftype *cftype, u64 max_value) +{ + struct task_group *tg; + int ret = 0; + + if (max_value > SCHED_CAPACITY_SCALE) + return -ERANGE; + + rcu_read_lock(); + + tg = css_tg(css); + if (tg->uclamp[UCLAMP_MAX].value == max_value) + goto out; + if (tg->uclamp[UCLAMP_MIN].value > max_value) { + ret = -EINVAL; + goto out; + } + +out: + rcu_read_unlock(); + + return ret; +} + +static inline u64 cpu_uclamp_read(struct cgroup_subsys_state *css, + enum uclamp_id clamp_id) +{ + struct task_group *tg; + u64 util_clamp; + + rcu_read_lock(); + tg = css_tg(css); + util_clamp = tg->uclamp[clamp_id].value; + rcu_read_unlock(); + + return util_clamp; +} + +static u64 cpu_util_min_read_u64(struct cgroup_subsys_state *css, + struct cftype *cft) +{ + return cpu_uclamp_read(css, UCLAMP_MIN); +} + +static u64 cpu_util_max_read_u64(struct cgroup_subsys_state *css, + struct cftype *cft) +{ + return cpu_uclamp_read(css, UCLAMP_MAX); +} +#endif /* CONFIG_UCLAMP_TASK_GROUP */ + #ifdef CONFIG_FAIR_GROUP_SCHED static int cpu_shares_write_u64(struct cgroup_subsys_state *css, struct cftype *cftype, u64 shareval) @@ -7452,6 +7557,18 @@ static struct cftype cpu_legacy_files[] = { .read_u64 = cpu_rt_period_read_uint, .write_u64 = cpu_rt_period_write_uint, }, +#endif +#ifdef CONFIG_UCLAMP_TASK_GROUP + { + .name = "util.min", + .read_u64 = cpu_util_min_read_u64, + .write_u64 = cpu_util_min_write_u64, + }, + { + .name = "util.max", + 
.read_u64 = cpu_util_max_read_u64, + .write_u64 = cpu_util_max_write_u64, + }, #endif { } /* Terminate */ }; @@ -7619,6 +7736,20 @@ static struct cftype cpu_files[] = { .seq_show = cpu_max_show, .write = cpu_max_write, }, +#endif +#ifdef CONFIG_UCLAMP_TASK_GROUP + { + .name = "util.min", + .flags = CFTYPE_NOT_ON_ROOT, + .read_u64 = cpu_util_min_read_u64, + .write_u64 = cpu_util_min_write_u64, + }, + { + .name = "util.max", + .flags = CFTYPE_NOT_ON_ROOT, + .read_u64 = cpu_util_max_read_u64, + .write_u64 = cpu_util_max_write_u64, + }, #endif { } /* terminate */ }; diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index a70f4bf66285..eca7d1a6cd43 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -399,6 +399,11 @@ struct task_group { #endif struct cfs_bandwidth cfs_bandwidth; + +#ifdef CONFIG_UCLAMP_TASK_GROUP + struct uclamp_se uclamp[UCLAMP_CNT]; +#endif + }; #ifdef CONFIG_FAIR_GROUP_SCHED From patchwork Tue Jan 15 10:15:10 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Patrick Bellasi X-Patchwork-Id: 10764209 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 37D3914E5 for ; Tue, 15 Jan 2019 10:16:37 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 24E2E2AFB3 for ; Tue, 15 Jan 2019 10:16:37 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 15BEE2B175; Tue, 15 Jan 2019 10:16:37 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 512782AFB3 for ; Tue, 15 Jan 2019 10:16:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728913AbfAOKQM (ORCPT ); Tue, 15 Jan 2019 05:16:12 -0500 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:47008 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728884AbfAOKQL (ORCPT ); Tue, 15 Jan 2019 05:16:11 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 779961596; Tue, 15 Jan 2019 02:16:10 -0800 (PST) Received: from e110439-lin.cambridge.arm.com (e110439-lin.cambridge.arm.com [10.1.194.43]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 6093F3F70D; Tue, 15 Jan 2019 02:16:07 -0800 (PST) From: Patrick Bellasi To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, linux-api@vger.kernel.org Cc: Ingo Molnar , Peter Zijlstra , Tejun Heo , "Rafael J . 
Wysocki" , Vincent Guittot , Viresh Kumar , Paul Turner , Quentin Perret , Dietmar Eggemann , Morten Rasmussen , Juri Lelli , Todd Kjos , Joel Fernandes , Steve Muckle , Suren Baghdasaryan Subject: [PATCH v6 13/16] sched/core: uclamp: Propagate parent clamps Date: Tue, 15 Jan 2019 10:15:10 +0000 Message-Id: <20190115101513.2822-14-patrick.bellasi@arm.com> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20190115101513.2822-1-patrick.bellasi@arm.com> References: <20190115101513.2822-1-patrick.bellasi@arm.com> MIME-Version: 1.0 Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP In order to properly support hierarchical resources control, the cgroup delegation model requires that attribute writes from a child group never fail but still are (potentially) constrained based on parent's assigned resources. This requires to properly propagate and aggregate parent attributes down to its descendants. Let's implement this mechanism by adding a new "effective" clamp value for each task group. The effective clamp value is defined as the smaller value between the clamp value of a group and the effective clamp value of its parent. This is the actual clamp value enforced on tasks in a task group. Since it can be interesting for userspace, e.g. system management software, to know exactly what the currently propagated/enforced configuration is, the effective clamp values are exposed to user-space by means of a new pair of read-only attributes cpu.util.{min,max}.effective. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Tejun Heo --- Changes in v6: Others: - wholesale s/group/bucket/ - wholesale s/_{get,put}/_{inc,dec}/ to match refcount APIs --- Documentation/admin-guide/cgroup-v2.rst | 25 ++++++- include/linux/sched.h | 10 ++- kernel/sched/core.c | 89 +++++++++++++++++++++++-- 3 files changed, 117 insertions(+), 7 deletions(-) diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst index a059aaf7cce6..7aad2435e961 100644 --- a/Documentation/admin-guide/cgroup-v2.rst +++ b/Documentation/admin-guide/cgroup-v2.rst @@ -984,22 +984,43 @@ All time durations are in microseconds. A read-write single value file which exists on non-root cgroups. The default is "0", i.e. no utilization boosting. - The minimum utilization in the range [0, 1024]. + The requested minimum utilization in the range [0, 1024]. This interface allows reading and setting minimum utilization clamp values similar to the sched_setattr(2). This minimum utilization value is used to clamp the task specific minimum utilization clamp. + cpu.util.min.effective + A read-only single value file which exists on non-root cgroups and + reports minimum utilization clamp value currently enforced on a task + group. + + The actual minimum utilization in the range [0, 1024]. + + This value can be lower then cpu.util.min in case a parent cgroup + allows only smaller minimum utilization values. + cpu.util.max A read-write single value file which exists on non-root cgroups. The default is "1024". i.e. no utilization capping - The maximum utilization in the range [0, 1024]. + The requested maximum utilization in the range [0, 1024]. This interface allows reading and setting maximum utilization clamp values similar to the sched_setattr(2). This maximum utilization value is used to clamp the task specific maximum utilization clamp. 
+ cpu.util.max.effective + A read-only single value file which exists on non-root cgroups and + reports maximum utilization clamp value currently enforced on a task + group. + + The actual maximum utilization in the range [0, 1024]. + + This value can be lower then cpu.util.max in case a parent cgroup + is enforcing a more restrictive clamping on max utilization. + + Memory ------ diff --git a/include/linux/sched.h b/include/linux/sched.h index c8f391d1cdc5..05d286524d70 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -625,7 +625,15 @@ struct uclamp_se { unsigned int bucket_id : bits_per(UCLAMP_BUCKETS); unsigned int mapped : 1; unsigned int active : 1; - /* Clamp bucket and value actually used by a RUNNABLE task */ + /* + * Clamp bucket and value actually used by a scheduling entity, + * i.e. a (RUNNABLE) task or a task group. + * For task groups, this is the value (possibly) enforced by a + * parent task group. + * For a task, this is the value (possibly) enforced by the + * task group the task is currently part of or by the system + * default clamp values, whichever is the most restrictive. + */ struct { unsigned int value : bits_per(SCHED_CAPACITY_SCALE); unsigned int bucket_id : bits_per(UCLAMP_BUCKETS); diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 29ae83fb9786..ddbd591b305c 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -1300,6 +1300,7 @@ static void __init init_uclamp(void) uc_se = &root_task_group.uclamp[clamp_id]; uc_se->value = uclamp_none(clamp_id); uc_se->bucket_id = 0; + uc_se->effective.value = uclamp_none(clamp_id); #endif } } @@ -6890,6 +6891,8 @@ static inline int alloc_uclamp_sched_group(struct task_group *tg, parent->uclamp[clamp_id].value; tg->uclamp[clamp_id].bucket_id = parent->uclamp[clamp_id].bucket_id; + tg->uclamp[clamp_id].effective.value = + parent->uclamp[clamp_id].effective.value; } #endif @@ -7143,6 +7146,45 @@ static void cpu_cgroup_attach(struct cgroup_taskset *tset) } #ifdef CONFIG_UCLAMP_TASK_GROUP +static void cpu_util_update_hier(struct cgroup_subsys_state *css, + int clamp_id, unsigned int value) +{ + struct cgroup_subsys_state *top_css = css; + struct uclamp_se *uc_se, *uc_parent; + + css_for_each_descendant_pre(css, top_css) { + /* + * The first visited task group is top_css, which clamp value + * is the one passed as parameter. For descendent task + * groups we consider their current value. + */ + uc_se = &css_tg(css)->uclamp[clamp_id]; + if (css != top_css) + value = uc_se->value; + + /* + * Skip the whole subtrees if the current effective clamp is + * already matching the TG's clamp value. + * In this case, all the subtrees already have top_value, or a + * more restrictive value, as effective clamp. 
+ */ + uc_parent = &css_tg(css)->parent->uclamp[clamp_id]; + if (uc_se->effective.value == value && + uc_parent->effective.value >= value) { + css = css_rightmost_descendant(css); + continue; + } + + /* Propagate the most restrictive effective value */ + if (uc_parent->effective.value < value) + value = uc_parent->effective.value; + if (uc_se->effective.value == value) + continue; + + uc_se->effective.value = value; + } +} + static int cpu_util_min_write_u64(struct cgroup_subsys_state *css, struct cftype *cftype, u64 min_value) { @@ -7162,6 +7204,9 @@ static int cpu_util_min_write_u64(struct cgroup_subsys_state *css, goto out; } + /* Update effective clamps to track the most restrictive value */ + cpu_util_update_hier(css, UCLAMP_MIN, min_value); + out: rcu_read_unlock(); @@ -7187,6 +7232,9 @@ static int cpu_util_max_write_u64(struct cgroup_subsys_state *css, goto out; } + /* Update effective clamps to track the most restrictive value */ + cpu_util_update_hier(css, UCLAMP_MAX, max_value); + out: rcu_read_unlock(); @@ -7194,14 +7242,17 @@ static int cpu_util_max_write_u64(struct cgroup_subsys_state *css, } static inline u64 cpu_uclamp_read(struct cgroup_subsys_state *css, - enum uclamp_id clamp_id) + enum uclamp_id clamp_id, + bool effective) { struct task_group *tg; u64 util_clamp; rcu_read_lock(); tg = css_tg(css); - util_clamp = tg->uclamp[clamp_id].value; + util_clamp = effective + ? tg->uclamp[clamp_id].effective.value + : tg->uclamp[clamp_id].value; rcu_read_unlock(); return util_clamp; @@ -7210,13 +7261,25 @@ static inline u64 cpu_uclamp_read(struct cgroup_subsys_state *css, static u64 cpu_util_min_read_u64(struct cgroup_subsys_state *css, struct cftype *cft) { - return cpu_uclamp_read(css, UCLAMP_MIN); + return cpu_uclamp_read(css, UCLAMP_MIN, false); } static u64 cpu_util_max_read_u64(struct cgroup_subsys_state *css, struct cftype *cft) { - return cpu_uclamp_read(css, UCLAMP_MAX); + return cpu_uclamp_read(css, UCLAMP_MAX, false); +} + +static u64 cpu_util_min_effective_read_u64(struct cgroup_subsys_state *css, + struct cftype *cft) +{ + return cpu_uclamp_read(css, UCLAMP_MIN, true); +} + +static u64 cpu_util_max_effective_read_u64(struct cgroup_subsys_state *css, + struct cftype *cft) +{ + return cpu_uclamp_read(css, UCLAMP_MAX, true); } #endif /* CONFIG_UCLAMP_TASK_GROUP */ @@ -7564,11 +7627,19 @@ static struct cftype cpu_legacy_files[] = { .read_u64 = cpu_util_min_read_u64, .write_u64 = cpu_util_min_write_u64, }, + { + .name = "util.min.effective", + .read_u64 = cpu_util_min_effective_read_u64, + }, { .name = "util.max", .read_u64 = cpu_util_max_read_u64, .write_u64 = cpu_util_max_write_u64, }, + { + .name = "util.max.effective", + .read_u64 = cpu_util_max_effective_read_u64, + }, #endif { } /* Terminate */ }; @@ -7744,12 +7815,22 @@ static struct cftype cpu_files[] = { .read_u64 = cpu_util_min_read_u64, .write_u64 = cpu_util_min_write_u64, }, + { + .name = "util.min.effective", + .flags = CFTYPE_NOT_ON_ROOT, + .read_u64 = cpu_util_min_effective_read_u64, + }, { .name = "util.max", .flags = CFTYPE_NOT_ON_ROOT, .read_u64 = cpu_util_max_read_u64, .write_u64 = cpu_util_max_write_u64, }, + { + .name = "util.max.effective", + .flags = CFTYPE_NOT_ON_ROOT, + .read_u64 = cpu_util_max_effective_read_u64, + }, #endif { } /* terminate */ }; From patchwork Tue Jan 15 10:15:11 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Patrick Bellasi X-Patchwork-Id: 10764203 Return-Path: Received: from mail.wl.linuxfoundation.org 
(pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0B09A1390 for ; Tue, 15 Jan 2019 10:16:17 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id ECB912AFF2 for ; Tue, 15 Jan 2019 10:16:16 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id E02572B3AB; Tue, 15 Jan 2019 10:16:16 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 45F832AFF2 for ; Tue, 15 Jan 2019 10:16:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728939AbfAOKQP (ORCPT ); Tue, 15 Jan 2019 05:16:15 -0500 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:47022 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728937AbfAOKQO (ORCPT ); Tue, 15 Jan 2019 05:16:14 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id CE7DA15AD; Tue, 15 Jan 2019 02:16:13 -0800 (PST) Received: from e110439-lin.cambridge.arm.com (e110439-lin.cambridge.arm.com [10.1.194.43]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id B78433F70D; Tue, 15 Jan 2019 02:16:10 -0800 (PST) From: Patrick Bellasi To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, linux-api@vger.kernel.org Cc: Ingo Molnar , Peter Zijlstra , Tejun Heo , "Rafael J . Wysocki" , Vincent Guittot , Viresh Kumar , Paul Turner , Quentin Perret , Dietmar Eggemann , Morten Rasmussen , Juri Lelli , Todd Kjos , Joel Fernandes , Steve Muckle , Suren Baghdasaryan Subject: [PATCH v6 14/16] sched/core: uclamp: Map TG's clamp values into CPU's clamp buckets Date: Tue, 15 Jan 2019 10:15:11 +0000 Message-Id: <20190115101513.2822-15-patrick.bellasi@arm.com> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20190115101513.2822-1-patrick.bellasi@arm.com> References: <20190115101513.2822-1-patrick.bellasi@arm.com> MIME-Version: 1.0 Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Utilization clamping requires to map each different clamp value into one of the available clamp buckets used at {en,de}queue time (fast-path). Each time a TG's clamp value sysfs attribute is updated via: cpu_util_{min,max}_write_u64() we need to update the task group reference to the new value's clamp bucket and release the reference to the previous one. Ensure that, whenever a task group is assigned a specific clamp_value, this is properly translated into a unique clamp bucket to be used in the fast-path. Do it by slightly refactoring uclamp_bucket_inc() to make the (*task_struct) parameter optional and by reusing the code already available for the per-task API. 
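In short, the task group side of the bucket refcounting boils down to the following (a simplified sketch mirroring the diff below, shown here only for illustration; it is not additional code to apply):

	/*
	 * On a cgroup attribute write, or when a new group is created from
	 * its parent's value: map the requested clamp value into a bucket
	 * and take a reference on it.
	 */
	uclamp_bucket_inc(NULL, &tg->uclamp[clamp_id], clamp_id, clamp_value);

	/*
	 * On task group destruction: release the bucket references the TG
	 * has been holding, one per clamp index.
	 */
	static inline void free_uclamp_sched_group(struct task_group *tg)
	{
		int clamp_id;

		for (clamp_id = 0; clamp_id < UCLAMP_CNT; ++clamp_id)
			uclamp_bucket_dec(clamp_id, tg->uclamp[clamp_id].bucket_id);
	}

The per-task path keeps calling uclamp_bucket_inc(p, ...) as before; making the task_struct parameter optional is what allows the TG path above to simply pass NULL.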
Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Tejun Heo --- Changes in v6: Others: - wholesale s/group/bucket/ - wholesale s/_{get,put}/_{inc,dec}/ to match refcount APIs --- include/linux/sched.h | 4 ++-- kernel/sched/core.c | 53 +++++++++++++++++++++++++++++++++---------- 2 files changed, 43 insertions(+), 14 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index 05d286524d70..3f02128fe6b2 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -617,8 +617,8 @@ struct sched_dl_entity { * uclamp_bucket_dec() - for the old clamp value * * The active bit is set whenever a task has got an effective clamp bucket - * and value assigned, which can be different from the user requested ones. - * This allows to know a task is actually refcounting a CPU's clamp bucket. + * and value assigned, and it allows to know a task is actually refcounting a + * CPU's clamp bucket. */ struct uclamp_se { unsigned int value : bits_per(SCHED_CAPACITY_SCALE); diff --git a/kernel/sched/core.c b/kernel/sched/core.c index ddbd591b305c..734b769db2ca 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -1298,9 +1298,9 @@ static void __init init_uclamp(void) #ifdef CONFIG_UCLAMP_TASK_GROUP /* Init root TG's clamp bucket */ uc_se = &root_task_group.uclamp[clamp_id]; - uc_se->value = uclamp_none(clamp_id); - uc_se->bucket_id = 0; - uc_se->effective.value = uclamp_none(clamp_id); + uclamp_bucket_inc(NULL, uc_se, clamp_id, uclamp_none(UCLAMP_MAX)); + uc_se->effective.bucket_id = uc_se->bucket_id; + uc_se->effective.value = uc_se->value; #endif } } @@ -6880,6 +6880,16 @@ void ia64_set_curr_task(int cpu, struct task_struct *p) /* task_group_lock serializes the addition/removal of task groups */ static DEFINE_SPINLOCK(task_group_lock); +static inline void free_uclamp_sched_group(struct task_group *tg) +{ +#ifdef CONFIG_UCLAMP_TASK_GROUP + int clamp_id; + + for (clamp_id = 0; clamp_id < UCLAMP_CNT; ++clamp_id) + uclamp_bucket_dec(clamp_id, tg->uclamp[clamp_id].bucket_id); +#endif +} + static inline int alloc_uclamp_sched_group(struct task_group *tg, struct task_group *parent) { @@ -6887,12 +6897,12 @@ static inline int alloc_uclamp_sched_group(struct task_group *tg, int clamp_id; for (clamp_id = 0; clamp_id < UCLAMP_CNT; ++clamp_id) { - tg->uclamp[clamp_id].value = - parent->uclamp[clamp_id].value; - tg->uclamp[clamp_id].bucket_id = - parent->uclamp[clamp_id].bucket_id; + uclamp_bucket_inc(NULL, &tg->uclamp[clamp_id], clamp_id, + parent->uclamp[clamp_id].value); tg->uclamp[clamp_id].effective.value = parent->uclamp[clamp_id].effective.value; + tg->uclamp[clamp_id].effective.bucket_id = + parent->uclamp[clamp_id].effective.bucket_id; } #endif @@ -6901,6 +6911,7 @@ static inline int alloc_uclamp_sched_group(struct task_group *tg, static void sched_free_group(struct task_group *tg) { + free_uclamp_sched_group(tg); free_fair_sched_group(tg); free_rt_sched_group(tg); autogroup_free(tg); @@ -7147,7 +7158,8 @@ static void cpu_cgroup_attach(struct cgroup_taskset *tset) #ifdef CONFIG_UCLAMP_TASK_GROUP static void cpu_util_update_hier(struct cgroup_subsys_state *css, - int clamp_id, unsigned int value) + unsigned int clamp_id, unsigned int bucket_id, + unsigned int value) { struct cgroup_subsys_state *top_css = css; struct uclamp_se *uc_se, *uc_parent; @@ -7159,8 +7171,10 @@ static void cpu_util_update_hier(struct cgroup_subsys_state *css, * groups we consider their current value. 
*/ uc_se = &css_tg(css)->uclamp[clamp_id]; - if (css != top_css) + if (css != top_css) { value = uc_se->value; + bucket_id = uc_se->effective.bucket_id; + } /* * Skip the whole subtrees if the current effective clamp is @@ -7176,12 +7190,15 @@ static void cpu_util_update_hier(struct cgroup_subsys_state *css, } /* Propagate the most restrictive effective value */ - if (uc_parent->effective.value < value) + if (uc_parent->effective.value < value) { value = uc_parent->effective.value; + bucket_id = uc_parent->effective.bucket_id; + } if (uc_se->effective.value == value) continue; uc_se->effective.value = value; + uc_se->effective.bucket_id = bucket_id; } } @@ -7194,6 +7211,7 @@ static int cpu_util_min_write_u64(struct cgroup_subsys_state *css, if (min_value > SCHED_CAPACITY_SCALE) return -ERANGE; + mutex_lock(&uclamp_mutex); rcu_read_lock(); tg = css_tg(css); @@ -7204,11 +7222,16 @@ static int cpu_util_min_write_u64(struct cgroup_subsys_state *css, goto out; } + /* Update TG's reference count */ + uclamp_bucket_inc(NULL, &tg->uclamp[UCLAMP_MIN], UCLAMP_MIN, min_value); + /* Update effective clamps to track the most restrictive value */ - cpu_util_update_hier(css, UCLAMP_MIN, min_value); + cpu_util_update_hier(css, UCLAMP_MIN, tg->uclamp[UCLAMP_MIN].bucket_id, + min_value); out: rcu_read_unlock(); + mutex_unlock(&uclamp_mutex); return ret; } @@ -7222,6 +7245,7 @@ static int cpu_util_max_write_u64(struct cgroup_subsys_state *css, if (max_value > SCHED_CAPACITY_SCALE) return -ERANGE; + mutex_lock(&uclamp_mutex); rcu_read_lock(); tg = css_tg(css); @@ -7232,11 +7256,16 @@ static int cpu_util_max_write_u64(struct cgroup_subsys_state *css, goto out; } + /* Update TG's reference count */ + uclamp_bucket_inc(NULL, &tg->uclamp[UCLAMP_MAX], UCLAMP_MAX, max_value); + /* Update effective clamps to track the most restrictive value */ - cpu_util_update_hier(css, UCLAMP_MAX, max_value); + cpu_util_update_hier(css, UCLAMP_MAX, tg->uclamp[UCLAMP_MAX].bucket_id, + max_value); out: rcu_read_unlock(); + mutex_unlock(&uclamp_mutex); return ret; } From patchwork Tue Jan 15 10:15:12 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Patrick Bellasi X-Patchwork-Id: 10764207 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C3AD014E5 for ; Tue, 15 Jan 2019 10:16:32 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B14312AFB3 for ; Tue, 15 Jan 2019 10:16:32 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id A54FA2B175; Tue, 15 Jan 2019 10:16:32 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 109AF2AFB3 for ; Tue, 15 Jan 2019 10:16:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728353AbfAOKQT (ORCPT ); Tue, 15 Jan 2019 05:16:19 -0500 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:47044 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728950AbfAOKQS (ORCPT ); Tue, 15 Jan 2019 05:16:18 -0500 Received: from 
usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 3096C1596; Tue, 15 Jan 2019 02:16:17 -0800 (PST) Received: from e110439-lin.cambridge.arm.com (e110439-lin.cambridge.arm.com [10.1.194.43]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 1A0A23F70D; Tue, 15 Jan 2019 02:16:13 -0800 (PST) From: Patrick Bellasi To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, linux-api@vger.kernel.org Cc: Ingo Molnar , Peter Zijlstra , Tejun Heo , "Rafael J . Wysocki" , Vincent Guittot , Viresh Kumar , Paul Turner , Quentin Perret , Dietmar Eggemann , Morten Rasmussen , Juri Lelli , Todd Kjos , Joel Fernandes , Steve Muckle , Suren Baghdasaryan Subject: [PATCH v6 15/16] sched/core: uclamp: Use TG's clamps to restrict TASK's clamps Date: Tue, 15 Jan 2019 10:15:12 +0000 Message-Id: <20190115101513.2822-16-patrick.bellasi@arm.com> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20190115101513.2822-1-patrick.bellasi@arm.com> References: <20190115101513.2822-1-patrick.bellasi@arm.com> MIME-Version: 1.0 Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP When a task specific clamp value is configured via sched_setattr(2), this value is accounted in the corresponding clamp bucket every time the task is {en,de}qeued. However, when cgroups are also in use, the task specific clamp values could be restricted by the task_group (TG) clamp values. Update uclamp_cpu_inc() to aggregate task and TG clamp values. Every time a task is enqueued, it's accounted in the clamp_bucket defining the smaller clamp between the task specific value and its TG effective value. This allows to: 1. ensure cgroup clamps are always used to restrict task specific requests, i.e. boosted only up to the effective granted value or clamped at least to a certain value 2. implement a "nice-like" policy, where tasks are still allowed to request less then what enforced by their current TG This mimics what already happens for a task's CPU affinity mask when the task is also in a cpuset, i.e. cgroup attributes are always used to restrict per-task attributes. Do this by exploiting the concept of "effective" clamp, which is already used by a TG to track parent enforced restrictions. Apply task group clamp restrictions only to tasks belonging to a child group. While, for tasks in the root group or in an autogroup, only system defaults are enforced. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Tejun Heo --- Changes in v6: Others: - wholesale s/group/bucket/ --- include/linux/sched.h | 10 ++++++++++ kernel/sched/core.c | 42 +++++++++++++++++++++++++++++++++++++++++- 2 files changed, 51 insertions(+), 1 deletion(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index 3f02128fe6b2..bb4e3b1085f9 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -602,6 +602,7 @@ struct sched_dl_entity { * @bucket_id: the bucket index used by the fast-path * @mapped: the bucket index is valid * @active: the se is currently refcounted in a CPU's clamp bucket + * @user_defined: calmp value explicitly required from user-space * * A utilization clamp bucket maps a: * clamp value (value), i.e. @@ -619,12 +620,21 @@ struct sched_dl_entity { * The active bit is set whenever a task has got an effective clamp bucket * and value assigned, and it allows to know a task is actually refcounting a * CPU's clamp bucket. 
+ * + * The user_defined bit is set whenever a task has got a task-specific clamp + * value requested from userspace, i.e. the system defaults apply to this + * task just as a restriction. This allows to relax TG's clamps when a less + * restrictive task specific value has been defined, thus allowing to + * implement a "nice" semantic when both task bucket and task specific values + * are used. For example, a task running on a 20% boosted TG can still drop + * its own boosting to 0%. */ struct uclamp_se { unsigned int value : bits_per(SCHED_CAPACITY_SCALE); unsigned int bucket_id : bits_per(UCLAMP_BUCKETS); unsigned int mapped : 1; unsigned int active : 1; + unsigned int user_defined : 1; /* * Clamp bucket and value actually used by a scheduling entity, * i.e. a (RUNNABLE) task or a task group. diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 734b769db2ca..c8d1fc9880ff 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -845,10 +845,23 @@ static inline void uclamp_cpu_update(struct rq *rq, unsigned int clamp_id, WRITE_ONCE(rq->uclamp[clamp_id].value, max_value); } +static inline bool uclamp_apply_defaults(struct task_struct *p) +{ + if (!IS_ENABLED(CONFIG_UCLAMP_TASK_GROUP)) + return true; + if (task_group_is_autogroup(task_group(p))) + return true; + if (task_group(p) == &root_task_group) + return true; + return false; +} + /* * The effective clamp bucket index of a task depends on, by increasing * priority: * - the task specific clamp value, explicitly requested from userspace + * - the task group effective clamp value, for tasks not in the root group or + * in an autogroup * - the system default clamp value, defined by the sysadmin * * As a side effect, update the task's effective value: @@ -865,6 +878,29 @@ uclamp_effective_get(struct task_struct *p, unsigned int clamp_id, *clamp_value = p->uclamp[clamp_id].value; *bucket_id = p->uclamp[clamp_id].bucket_id; + if (!uclamp_apply_defaults(p)) { +#ifdef CONFIG_UCLAMP_TASK_GROUP + unsigned int clamp_max, bucket_max; + struct uclamp_se *tg_clamp; + + tg_clamp = &task_group(p)->uclamp[clamp_id]; + clamp_max = tg_clamp->effective.value; + bucket_max = tg_clamp->effective.bucket_id; + + if (!p->uclamp[clamp_id].user_defined || + *clamp_value > clamp_max) { + *clamp_value = clamp_max; + *bucket_id = bucket_max; + } +#endif + /* + * If we have task groups and we are running in a child group, + * system default does not apply anymore since we assume task + * group clamps are properly configured. + */ + return; + } + /* RT tasks have different default values */ default_clamp = task_has_rt_policy(p) ? 
uclamp_default_perf @@ -1223,10 +1259,12 @@ static int __setscheduler_uclamp(struct task_struct *p, mutex_lock(&uclamp_mutex); if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP_MIN) { + p->uclamp[UCLAMP_MIN].user_defined = true; uclamp_bucket_inc(p, &p->uclamp[UCLAMP_MIN], UCLAMP_MIN, lower_bound); } if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP_MAX) { + p->uclamp[UCLAMP_MAX].user_defined = true; uclamp_bucket_inc(p, &p->uclamp[UCLAMP_MAX], UCLAMP_MAX, upper_bound); } @@ -1259,8 +1297,10 @@ static void uclamp_fork(struct task_struct *p, bool reset) for (clamp_id = 0; clamp_id < UCLAMP_CNT; ++clamp_id) { unsigned int clamp_value = p->uclamp[clamp_id].value; - if (unlikely(reset)) + if (unlikely(reset)) { clamp_value = uclamp_none(clamp_id); + p->uclamp[clamp_id].user_defined = false; + } p->uclamp[clamp_id].mapped = false; p->uclamp[clamp_id].active = false; From patchwork Tue Jan 15 10:15:13 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Patrick Bellasi X-Patchwork-Id: 10764205 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 967901390 for ; Tue, 15 Jan 2019 10:16:29 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 845572AFB3 for ; Tue, 15 Jan 2019 10:16:29 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 77E9A2B175; Tue, 15 Jan 2019 10:16:29 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id C79F72AFF2 for ; Tue, 15 Jan 2019 10:16:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728967AbfAOKQV (ORCPT ); Tue, 15 Jan 2019 05:16:21 -0500 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:47060 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728961AbfAOKQU (ORCPT ); Tue, 15 Jan 2019 05:16:20 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8681815AD; Tue, 15 Jan 2019 02:16:20 -0800 (PST) Received: from e110439-lin.cambridge.arm.com (e110439-lin.cambridge.arm.com [10.1.194.43]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 6F90A3F70D; Tue, 15 Jan 2019 02:16:17 -0800 (PST) From: Patrick Bellasi To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, linux-api@vger.kernel.org Cc: Ingo Molnar , Peter Zijlstra , Tejun Heo , "Rafael J . 
Wysocki" , Vincent Guittot , Viresh Kumar , Paul Turner , Quentin Perret , Dietmar Eggemann , Morten Rasmussen , Juri Lelli , Todd Kjos , Joel Fernandes , Steve Muckle , Suren Baghdasaryan Subject: [PATCH v6 16/16] sched/core: uclamp: Update CPU's refcount on TG's clamp changes Date: Tue, 15 Jan 2019 10:15:13 +0000 Message-Id: <20190115101513.2822-17-patrick.bellasi@arm.com> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20190115101513.2822-1-patrick.bellasi@arm.com> References: <20190115101513.2822-1-patrick.bellasi@arm.com> MIME-Version: 1.0 Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP On updates of task group (TG) clamp values, ensure that these new values are enforced on all RUNNABLE tasks of the task group, i.e. all RUNNABLE tasks are immediately boosted and/or clamped as requested. Do that by slightly refactoring uclamp_bucket_inc(). An additional parameter *cgroup_subsys_state (css) is used to walk the list of tasks in the TGs and update the RUNNABLE ones. Do that by taking the rq lock for each task, the same mechanism used for cpu affinity masks updates. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Tejun Heo --- Changes in v6: Others: - wholesale s/group/bucket/ - wholesale s/_{get,put}/_{inc,dec}/ to match refcount APIs - small documentation updates --- kernel/sched/core.c | 56 +++++++++++++++++++++++++++++++++------------ 1 file changed, 42 insertions(+), 14 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index c8d1fc9880ff..36866a1b9f9d 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -1111,7 +1111,22 @@ static void uclamp_bucket_dec(unsigned int clamp_id, unsigned int bucket_id) &uc_map_old.data, uc_map_new.data)); } -static void uclamp_bucket_inc(struct task_struct *p, struct uclamp_se *uc_se, +static inline void uclamp_bucket_inc_tg(struct cgroup_subsys_state *css, + int clamp_id, unsigned int bucket_id) +{ + struct css_task_iter it; + struct task_struct *p; + + /* Update clamp buckets for RUNNABLE tasks in this TG */ + css_task_iter_start(css, 0, &it); + while ((p = css_task_iter_next(&it))) + uclamp_task_update_active(p, clamp_id); + css_task_iter_end(&it); +} + +static void uclamp_bucket_inc(struct task_struct *p, + struct cgroup_subsys_state *css, + struct uclamp_se *uc_se, unsigned int clamp_id, unsigned int clamp_value) { union uclamp_map *uc_maps = &uclamp_maps[clamp_id][0]; @@ -1183,6 +1198,9 @@ static void uclamp_bucket_inc(struct task_struct *p, struct uclamp_se *uc_se, uc_se->value = clamp_value; uc_se->bucket_id = bucket_id; + if (css) + uclamp_bucket_inc_tg(css, clamp_id, bucket_id); + if (p) uclamp_task_update_active(p, clamp_id); @@ -1221,11 +1239,11 @@ int sched_uclamp_handler(struct ctl_table *table, int write, } if (old_min != sysctl_sched_uclamp_util_min) { - uclamp_bucket_inc(NULL, &uclamp_default[UCLAMP_MIN], + uclamp_bucket_inc(NULL, NULL, &uclamp_default[UCLAMP_MIN], UCLAMP_MIN, sysctl_sched_uclamp_util_min); } if (old_max != sysctl_sched_uclamp_util_max) { - uclamp_bucket_inc(NULL, &uclamp_default[UCLAMP_MAX], + uclamp_bucket_inc(NULL, NULL, &uclamp_default[UCLAMP_MAX], UCLAMP_MAX, sysctl_sched_uclamp_util_max); } goto done; @@ -1260,12 +1278,12 @@ static int __setscheduler_uclamp(struct task_struct *p, mutex_lock(&uclamp_mutex); if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP_MIN) { p->uclamp[UCLAMP_MIN].user_defined = true; - uclamp_bucket_inc(p, &p->uclamp[UCLAMP_MIN], + uclamp_bucket_inc(p, NULL, 
&p->uclamp[UCLAMP_MIN], UCLAMP_MIN, lower_bound); } if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP_MAX) { p->uclamp[UCLAMP_MAX].user_defined = true; - uclamp_bucket_inc(p, &p->uclamp[UCLAMP_MAX], + uclamp_bucket_inc(p, NULL, &p->uclamp[UCLAMP_MAX], UCLAMP_MAX, upper_bound); } mutex_unlock(&uclamp_mutex); @@ -1304,7 +1322,7 @@ static void uclamp_fork(struct task_struct *p, bool reset) p->uclamp[clamp_id].mapped = false; p->uclamp[clamp_id].active = false; - uclamp_bucket_inc(NULL, &p->uclamp[clamp_id], + uclamp_bucket_inc(NULL, NULL, &p->uclamp[clamp_id], clamp_id, clamp_value); } } @@ -1326,19 +1344,23 @@ static void __init init_uclamp(void) memset(uclamp_maps, 0, sizeof(uclamp_maps)); for (clamp_id = 0; clamp_id < UCLAMP_CNT; ++clamp_id) { uc_se = &init_task.uclamp[clamp_id]; - uclamp_bucket_inc(NULL, uc_se, clamp_id, uclamp_none(clamp_id)); + uclamp_bucket_inc(NULL, NULL, uc_se, clamp_id, + uclamp_none(clamp_id)); uc_se = &uclamp_default[clamp_id]; - uclamp_bucket_inc(NULL, uc_se, clamp_id, uclamp_none(clamp_id)); + uclamp_bucket_inc(NULL, NULL, uc_se, clamp_id, + uclamp_none(clamp_id)); /* RT tasks by default will go to max frequency */ uc_se = &uclamp_default_perf[clamp_id]; - uclamp_bucket_inc(NULL, uc_se, clamp_id, uclamp_none(UCLAMP_MAX)); + uclamp_bucket_inc(NULL, NULL, uc_se, clamp_id, + uclamp_none(UCLAMP_MAX)); #ifdef CONFIG_UCLAMP_TASK_GROUP /* Init root TG's clamp bucket */ uc_se = &root_task_group.uclamp[clamp_id]; - uclamp_bucket_inc(NULL, uc_se, clamp_id, uclamp_none(UCLAMP_MAX)); + uclamp_bucket_inc(NULL, NULL, uc_se, clamp_id, + uclamp_none(UCLAMP_MAX)); uc_se->effective.bucket_id = uc_se->bucket_id; uc_se->effective.value = uc_se->value; #endif @@ -6937,8 +6959,8 @@ static inline int alloc_uclamp_sched_group(struct task_group *tg, int clamp_id; for (clamp_id = 0; clamp_id < UCLAMP_CNT; ++clamp_id) { - uclamp_bucket_inc(NULL, &tg->uclamp[clamp_id], clamp_id, - parent->uclamp[clamp_id].value); + uclamp_bucket_inc(NULL, NULL, &tg->uclamp[clamp_id], + clamp_id, parent->uclamp[clamp_id].value); tg->uclamp[clamp_id].effective.value = parent->uclamp[clamp_id].effective.value; tg->uclamp[clamp_id].effective.bucket_id = @@ -7239,6 +7261,10 @@ static void cpu_util_update_hier(struct cgroup_subsys_state *css, uc_se->effective.value = value; uc_se->effective.bucket_id = bucket_id; + + /* Immediately updated descendants active tasks */ + if (css != top_css) + uclamp_bucket_inc_tg(css, clamp_id, bucket_id); } } @@ -7263,7 +7289,8 @@ static int cpu_util_min_write_u64(struct cgroup_subsys_state *css, } /* Update TG's reference count */ - uclamp_bucket_inc(NULL, &tg->uclamp[UCLAMP_MIN], UCLAMP_MIN, min_value); + uclamp_bucket_inc(NULL, css, &tg->uclamp[UCLAMP_MIN], + UCLAMP_MIN, min_value); /* Update effective clamps to track the most restrictive value */ cpu_util_update_hier(css, UCLAMP_MIN, tg->uclamp[UCLAMP_MIN].bucket_id, @@ -7297,7 +7324,8 @@ static int cpu_util_max_write_u64(struct cgroup_subsys_state *css, } /* Update TG's reference count */ - uclamp_bucket_inc(NULL, &tg->uclamp[UCLAMP_MAX], UCLAMP_MAX, max_value); + uclamp_bucket_inc(NULL, css, &tg->uclamp[UCLAMP_MAX], + UCLAMP_MAX, max_value); /* Update effective clamps to track the most restrictive value */ cpu_util_update_hier(css, UCLAMP_MAX, tg->uclamp[UCLAMP_MAX].bucket_id,