From patchwork Tue Mar 22 00:21:07 2016
From: Steve Muckle
To: Peter Zijlstra, Ingo Molnar
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
Wysocki" , Vincent Guittot , Morten Rasmussen , Dietmar Eggemann , Juri Lelli , Patrick Bellasi , Michael Turquette Subject: [PATCH 1/2] sched/fair: move cpufreq hook to update_cfs_rq_load_avg() Date: Mon, 21 Mar 2016 17:21:07 -0700 Message-Id: <1458606068-7476-1-git-send-email-smuckle@linaro.org> X-Mailer: git-send-email 2.4.10 Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org X-Spam-Status: No, score=-6.8 required=5.0 tests=BAYES_00,DKIM_SIGNED, RCVD_IN_DNSWL_HI,RP_MATCHES_RCVD,T_DKIM_INVALID,UNPARSEABLE_RELAY autolearn=unavailable version=3.3.1 X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on mail.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP The cpufreq hook should be called whenever the root cfs_rq utilization changes so update_cfs_rq_load_avg() is a better place for it. The current location is not invoked in the enqueue_entity() or update_blocked_averages() paths. Suggested-by: Vincent Guittot Signed-off-by: Steve Muckle --- kernel/sched/fair.c | 50 ++++++++++++++++++++++++++------------------------ 1 file changed, 26 insertions(+), 24 deletions(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 46d64e4ccfde..d418deb04049 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -2825,7 +2825,9 @@ static inline u64 cfs_rq_clock_task(struct cfs_rq *cfs_rq); static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq) { struct sched_avg *sa = &cfs_rq->avg; + struct rq *rq = rq_of(cfs_rq); int decayed, removed = 0; + int cpu = cpu_of(rq); if (atomic_long_read(&cfs_rq->removed_load_avg)) { s64 r = atomic_long_xchg(&cfs_rq->removed_load_avg, 0); @@ -2840,7 +2842,7 @@ static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq) sa->util_sum = max_t(s32, sa->util_sum - r * LOAD_AVG_MAX, 0); } - decayed = __update_load_avg(now, cpu_of(rq_of(cfs_rq)), sa, + decayed = __update_load_avg(now, cpu, sa, scale_load_down(cfs_rq->load.weight), cfs_rq->curr != NULL, cfs_rq); #ifndef CONFIG_64BIT @@ -2848,28 +2850,6 @@ static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq) cfs_rq->load_last_update_time_copy = sa->last_update_time; #endif - return decayed || removed; -} - -/* Update task and its cfs_rq load average */ -static inline void update_load_avg(struct sched_entity *se, int update_tg) -{ - struct cfs_rq *cfs_rq = cfs_rq_of(se); - u64 now = cfs_rq_clock_task(cfs_rq); - struct rq *rq = rq_of(cfs_rq); - int cpu = cpu_of(rq); - - /* - * Track task load average for carrying it to new CPU after migrated, and - * track group sched_entity load average for task_h_load calc in migration - */ - __update_load_avg(now, cpu, &se->avg, - se->on_rq * scale_load_down(se->load.weight), - cfs_rq->curr == se, NULL); - - if (update_cfs_rq_load_avg(now, cfs_rq) && update_tg) - update_tg_load_avg(cfs_rq, 0); - if (cpu == smp_processor_id() && &rq->cfs == cfs_rq) { unsigned long max = rq->cpu_capacity_orig; @@ -2890,8 +2870,30 @@ static inline void update_load_avg(struct sched_entity *se, int update_tg) * See cpu_util(). 
 		 */
 		cpufreq_update_util(rq_clock(rq),
-				    min(cfs_rq->avg.util_avg, max), max);
+				    min(sa->util_avg, max), max);
 	}
+
+	return decayed || removed;
+}
+
+/* Update task and its cfs_rq load average */
+static inline void update_load_avg(struct sched_entity *se, int update_tg)
+{
+	struct cfs_rq *cfs_rq = cfs_rq_of(se);
+	u64 now = cfs_rq_clock_task(cfs_rq);
+	struct rq *rq = rq_of(cfs_rq);
+	int cpu = cpu_of(rq);
+
+	/*
+	 * Track task load average for carrying it to new CPU after migrated, and
+	 * track group sched_entity load average for task_h_load calc in migration
+	 */
+	__update_load_avg(now, cpu, &se->avg,
+		se->on_rq * scale_load_down(se->load.weight),
+		cfs_rq->curr == se, NULL);
+
+	if (update_cfs_rq_load_avg(now, cfs_rq) && update_tg)
+		update_tg_load_avg(cfs_rq, 0);
 }
 
 static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
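
As referenced above, here is a minimal stand-alone sketch of the call-path
argument from the changelog. This is plain C that builds with any C compiler,
not kernel code: the function names mirror kernel/sched/fair.c, but the bodies
are stand-ins and the capacity value of 1024 is an assumed placeholder for
rq->cpu_capacity_orig. The only point it illustrates is that once the hook
lives in update_cfs_rq_load_avg(), every path that updates the cfs_rq averages
reaches it, including the enqueue_entity() and update_blocked_averages() paths
that do not go through update_load_avg().

/*
 * Toy model of the hook placement -- NOT kernel code.  Names mirror
 * kernel/sched/fair.c; bodies are stand-ins.
 */
#include <stdio.h>

struct cfs_rq { unsigned long util_avg; };

static int hook_calls;

static void cpufreq_update_util(unsigned long util, unsigned long max)
{
	hook_calls++;
	printf("cpufreq hook: util=%lu max=%lu\n", util, max);
}

static int update_cfs_rq_load_avg(struct cfs_rq *cfs_rq)
{
	unsigned long max = 1024;	/* stand-in for rq->cpu_capacity_orig */

	/* new call site: runs whenever the cfs_rq averages are updated */
	cpufreq_update_util(cfs_rq->util_avg < max ? cfs_rq->util_avg : max, max);
	return 1;
}

/* update_load_avg() path -- the hook's previous home */
static void update_load_avg(struct cfs_rq *cfs_rq)
{
	update_cfs_rq_load_avg(cfs_rq);
}

/* enqueue_entity() path -- did not pass through the old call site */
static void enqueue_entity_load_avg(struct cfs_rq *cfs_rq)
{
	update_cfs_rq_load_avg(cfs_rq);
}

/* update_blocked_averages() path -- did not pass through the old call site */
static void update_blocked_averages(struct cfs_rq *cfs_rq)
{
	update_cfs_rq_load_avg(cfs_rq);
}

int main(void)
{
	struct cfs_rq cfs = { .util_avg = 300 };

	update_load_avg(&cfs);
	enqueue_entity_load_avg(&cfs);
	update_blocked_averages(&cfs);
	printf("hook invoked %d times\n", hook_calls);
	return 0;
}

Built and run, the sketch reports the hook firing once per path, which is the
coverage the changelog argues for.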