From patchwork Tue May 12 19:38:47 2015
X-Patchwork-Submitter: Morten Rasmussen
X-Patchwork-Id: 6391001
From: Morten Rasmussen
To: peterz@infradead.org, mingo@redhat.com
Cc: vincent.guittot@linaro.org, Dietmar Eggemann, yuyang.du@intel.com,
	preeti@linux.vnet.ibm.com, mturquette@linaro.org, rjw@rjwysocki.net,
	Juri Lelli, sgurrappadi@nvidia.com, pang.xunlei@zte.com.cn,
	linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	morten.rasmussen@arm.com
Subject: [RFCv4 PATCH 12/34] sched: Initialize CFS task load and usage before placing task on rq
Date: Tue, 12 May 2015 20:38:47 +0100
Message-Id: <1431459549-18343-13-git-send-email-morten.rasmussen@arm.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1431459549-18343-1-git-send-email-morten.rasmussen@arm.com>
References: <1431459549-18343-1-git-send-email-morten.rasmussen@arm.com>

Task load or usage is not currently considered in select_task_rq_fair(),
but if we want to use it there in the future we must make sure it is not
zero for new tasks. The load-tracking sums are currently initialized
using sched_slice(), which does not work before the task has been
assigned a rq. Initialization is therefore changed to use another
semi-arbitrary value, sched_latency, instead.
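For illustration, and assuming the base 6ms default for
sysctl_sched_latency (before any rescaling for the number of online
cpus), the shift by 10 roughly converts ns to us and the seed value
works out as:

	u32 start_load = sysctl_sched_latency >> 10;
	/* 6000000 ns >> 10 = 5859, i.e. ~5.8ms worth of runnable/running
	 * time. avg_period is seeded with the same value, so a new task
	 * initially appears fully busy until real updates take over. */

The exact number is only a rough sketch; any non-zero seed of this
magnitude serves the purpose described above.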
cc: Ingo Molnar
cc: Peter Zijlstra

Signed-off-by: Morten Rasmussen
---
 kernel/sched/core.c | 4 ++--
 kernel/sched/fair.c | 7 +++----
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 355f953..bceb3a8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2126,6 +2126,8 @@ void wake_up_new_task(struct task_struct *p)
 	struct rq *rq;
 
 	raw_spin_lock_irqsave(&p->pi_lock, flags);
+	/* Initialize new task's runnable average */
+	init_task_runnable_average(p);
 #ifdef CONFIG_SMP
 	/*
 	 * Fork balancing, do it here and not earlier because:
@@ -2135,8 +2137,6 @@ void wake_up_new_task(struct task_struct *p)
 	set_task_cpu(p, select_task_rq(p, task_cpu(p), SD_BALANCE_FORK, 0));
 #endif
 
-	/* Initialize new task's runnable average */
-	init_task_runnable_average(p);
 	rq = __task_rq_lock(p);
 	activate_task(rq, p, 0);
 	p->on_rq = TASK_ON_RQ_QUEUED;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d045404..f20fae9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -675,11 +675,10 @@ static inline void __update_task_entity_utilization(struct sched_entity *se);
 /* Give new task start runnable values to heavy its load in infant time */
 void init_task_runnable_average(struct task_struct *p)
 {
-	u32 slice;
+	u32 start_load = sysctl_sched_latency >> 10;
 
-	slice = sched_slice(task_cfs_rq(p), &p->se) >> 10;
-	p->se.avg.runnable_avg_sum = p->se.avg.running_avg_sum = slice;
-	p->se.avg.avg_period = slice;
+	p->se.avg.runnable_avg_sum = p->se.avg.running_avg_sum = start_load;
+	p->se.avg.avg_period = start_load;
 	__update_task_entity_contrib(&p->se);
 	__update_task_entity_utilization(&p->se);
 }