From patchwork Thu Apr 25 17:23:26 2013
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 2489561
From: Vincent Guittot
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linaro-kernel@lists.linaro.org, peterz@infradead.org, mingo@kernel.org,
	linux@arm.linux.org.uk, pjt@google.com, santosh.shilimkar@ti.com,
	Morten.Rasmussen@arm.com, chander.kashyap@linaro.org, cmetcalf@tilera.com,
	tony.luck@intel.com, alex.shi@intel.com, preeti@linux.vnet.ibm.com
Cc: len.brown@intel.com, l.majewski@samsung.com, Vincent Guittot,
	corbet@lwn.net, amit.kucheria@linaro.org, tglx@linutronix.de,
	paulmck@linux.vnet.ibm.com, arjan@linux.intel.com
Subject: [PATCH 10/14] sched: update the buddy CPU
Date: Thu, 25 Apr 2013 19:23:26 +0200
Message-Id: <1366910611-20048-11-git-send-email-vincent.guittot@linaro.org>
In-Reply-To: <1366910611-20048-1-git-send-email-vincent.guittot@linaro.org>
References: <1366910611-20048-1-git-send-email-vincent.guittot@linaro.org>

Periodically updates
the buddy of a CPU according to the current activity of the system. A CPU is
its own buddy if it participates in the packing effort. Otherwise, it points
to a CPU that participates in the packing effort.

Signed-off-by: Vincent Guittot
---
 kernel/sched/fair.c |   91 ++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 86 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 234ecdd..28f8ea7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -174,11 +174,17 @@ void sched_init_granularity(void)

 #ifdef CONFIG_SMP
+static unsigned long power_of(int cpu)
+{
+	return cpu_rq(cpu)->cpu_power;
+}
+
 /*
  * Save the id of the optimal CPU that should be used to pack small tasks
  * The value -1 is used when no buddy has been found
  */
 DEFINE_PER_CPU(int, sd_pack_buddy);
+DEFINE_PER_CPU(struct sched_domain *, sd_pack_domain);

 /*
  * Look for the best buddy CPU that can be used to pack small tasks
@@ -237,6 +243,68 @@ void update_packing_domain(int cpu)
 	}

 	pr_debug("CPU%d packing on CPU%d\n", cpu, id);
+	per_cpu(sd_pack_domain, cpu) = sd;
+	per_cpu(sd_pack_buddy, cpu) = id;
+}
+
+void update_packing_buddy(int cpu, int activity)
+{
+	struct sched_domain *sd = per_cpu(sd_pack_domain, cpu);
+	struct sched_group *sg, *pack, *tmp;
+	int id = cpu;
+
+	if (!sd)
+		return;
+
+	/*
+	 * The sched_domain of a CPU points on the local sched_group
+	 * and this CPU of this local group is a good candidate
+	 */
+	pack = sg = sd->groups;
+
+	/* loop the sched groups to find the best one */
+	for (tmp = sg->next; tmp != sg; tmp = tmp->next) {
+		if ((tmp->sgp->power * pack->group_weight) >
+				(pack->sgp->power_available * tmp->group_weight))
+			continue;
+
+		if (((tmp->sgp->power * pack->group_weight) ==
+				(pack->sgp->power * tmp->group_weight))
+		 && (cpumask_first(sched_group_cpus(tmp)) >= id))
+			continue;
+
+		/* we have found a better group */
+		pack = tmp;
+
+		/* Take the 1st CPU of the new group */
+		id = cpumask_first(sched_group_cpus(pack));
+	}
+
+	if ((cpu == id) || (activity <= power_of(id))) {
+		per_cpu(sd_pack_buddy, cpu) = id;
+		return;
+	}
+
+	for (tmp = pack; activity > 0; tmp = tmp->next) {
+		if (tmp->sgp->power > activity) {
+			id = cpumask_first(sched_group_cpus(tmp));
+			activity -= power_of(id);
+			if (cpu == id)
+				activity = 0;
+			while ((activity > 0) && (id < nr_cpu_ids)) {
+				id = cpumask_next(id, sched_group_cpus(tmp));
+				activity -= power_of(id);
+				if (cpu == id)
+					activity = 0;
+			}
+		} else if (cpumask_test_cpu(cpu, sched_group_cpus(tmp))) {
+			id = cpu;
+			activity = 0;
+		} else {
+			activity -= tmp->sgp->power;
+		}
+	}
+
+	per_cpu(sd_pack_buddy, cpu) = id;
+}

@@ -3014,11 +3082,6 @@ static unsigned long target_load(int cpu, int type)
 	return max(rq->cpu_load[type-1], total);
 }

-static unsigned long power_of(int cpu)
-{
-	return cpu_rq(cpu)->cpu_power;
-}
-
 static unsigned long cpu_avg_load_per_task(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
@@ -4740,6 +4803,22 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 	return false;
 }

+static void update_plb_buddy(int cpu, int *balance, struct sd_lb_stats *sds,
+			struct sched_domain *sd)
+{
+	int buddy;
+
+	if (sysctl_sched_packing_mode != SCHED_PACKING_FULL)
+		return;
+
+	/* Update my buddy */
+	if (sd == per_cpu(sd_pack_domain, cpu))
+		update_packing_buddy(cpu, sds->total_activity);
+
+	/* Get my new buddy */
+	buddy = per_cpu(sd_pack_buddy, cpu);
+}
+
 /**
  * update_sd_lb_stats - Update sched_domain's statistics for load balancing.
  * @env: The load balancing environment.
@@ -4807,6 +4886,8 @@ static inline void update_sd_lb_stats(struct lb_env *env,
 		sg = sg->next;
 	} while (sg != env->sd->groups);
+
+	update_plb_buddy(env->dst_cpu, balance, sds, env->sd);
 }

 /**