From patchwork Sun May 11 18:16:59 2014
From: Yuyang Du <yuyang.du@intel.com>
To: mingo@redhat.com, peterz@infradead.org, rafael.j.wysocki@intel.com,
	linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org
Cc: arjan.van.de.ven@intel.com, len.brown@intel.com, alan.cox@intel.com,
	mark.gross@intel.com, morten.rasmussen@arm.com,
	vincent.guittot@linaro.org, rajeev.d.muralidhar@intel.com,
	vishwesh.m.rudramuni@intel.com, nicole.chalhoub@intel.com,
	ajaya.durg@intel.com, harinarayanan.seshadri@intel.com,
	jacob.jun.pan@linux.intel.com, fengguang.wu@intel.com,
	yuyang.du@intel.com
Subject: [RFC PATCH 10/12 v2] Intercept periodic nohz idle balancing
Date: Mon, 12 May 2014 02:16:59 +0800
Message-Id: <1399832221-8314-11-git-send-email-yuyang.du@intel.com>
In-Reply-To: <1399832221-8314-1-git-send-email-yuyang.du@intel.com>
References: <1399832221-8314-1-git-send-email-yuyang.du@intel.com>

We intercept load balancing to contain both the load and the load
balancing within the consolidated CPUs, according to our consolidation
mechanism. In the periodic nohz idle balance, the idle but
non-consolidated CPUs are skipped: they are neither chosen as the idle
load balancer nor rebalanced on behalf of.
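How the caller supplies the new mask argument of nohz_idle_balance() is
not part of the hunks below; presumably the nohz kickee filters
nohz.idle_cpus_mask through the consolidation hook first. A sketch of
such a call site, based only on the hooks used in this patch and not on
the literal call-site change, would look like:

	/* in the nohz kickee, e.g. run_rebalance_domains() (assumption) */
	struct cpumask *nonshielded = __get_cpu_var(local_cpu_mask);

	cpumask_copy(nonshielded, nohz.idle_cpus_mask);

	rcu_read_lock();
	/* drop the idle CPUs that the consolidation policy keeps shielded */
	workload_consolidation_nonshielded_mask(this_rq->cpu, nonshielded);
	rcu_read_unlock();

	/* rebalance only on behalf of the idle, non-shielded CPUs */
	nohz_idle_balance(this_rq, idle, nonshielded);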
Signed-off-by: Yuyang Du <yuyang.du@intel.com>
---
 kernel/sched/fair.c | 50 +++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 43 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 94c7a6a..9bb1304 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6867,10 +6867,46 @@ static struct {
 
 static inline int find_new_ilb(void)
 {
+#ifdef CONFIG_WORKLOAD_CONSOLIDATION
+	struct cpumask *nonshielded = __get_cpu_var(local_cpu_mask);
+	int ilb, weight;
+	int this_cpu = smp_processor_id();
+
+	/*
+	 * Optimize for the case when we have no idle CPUs or only one
+	 * idle CPU. Don't walk the sched_domain hierarchy in such cases
+	 */
+	if (cpumask_weight(nohz.idle_cpus_mask) < 2)
+		return nr_cpu_ids;
+
+	ilb = cpumask_first(nohz.idle_cpus_mask);
+
+	if (ilb < nr_cpu_ids && idle_cpu(ilb)) {
+
+		cpumask_copy(nonshielded, nohz.idle_cpus_mask);
+
+		rcu_read_lock();
+		workload_consolidation_nonshielded_mask(this_cpu, nonshielded);
+		rcu_read_unlock();
+
+		weight = cpumask_weight(nonshielded);
+
+		if (weight < 2)
+			return nr_cpu_ids;
+
+		/*
+		 * get idle load balancer again
+		 */
+		ilb = cpumask_first(nonshielded);
+		if (ilb < nr_cpu_ids && idle_cpu(ilb))
+			return ilb;
+	}
+#else
 	int ilb = cpumask_first(nohz.idle_cpus_mask);
 
 	if (ilb < nr_cpu_ids && idle_cpu(ilb))
 		return ilb;
+#endif
 
 	return nr_cpu_ids;
 }
@@ -7107,7 +7143,7 @@ out:
  * In CONFIG_NO_HZ_COMMON case, the idle balance kickee will do the
  * rebalancing for all the cpus for whom scheduler ticks are stopped.
  */
-static void nohz_idle_balance(struct rq *this_rq, enum cpu_idle_type idle)
+static void nohz_idle_balance(struct rq *this_rq, enum cpu_idle_type idle, struct cpumask *mask)
 {
 	int this_cpu = this_rq->cpu;
 	struct rq *rq;
@@ -7117,7 +7153,7 @@ static void nohz_idle_balance(struct rq *this_rq, enum cpu_idle_type idle)
 	    !test_bit(NOHZ_BALANCE_KICK, nohz_flags(this_cpu)))
 		goto end;
 
-	for_each_cpu(balance_cpu, nohz.idle_cpus_mask) {
+	for_each_cpu(balance_cpu, mask) {
 		if (balance_cpu == this_cpu || !idle_cpu(balance_cpu))
 			continue;
 
@@ -7165,10 +7201,10 @@ static inline int nohz_kick_needed(struct rq *rq)
 	if (unlikely(rq->idle_balance))
 		return 0;
 
-       /*
-	* We may be recently in ticked or tickless idle mode. At the first
-	* busy tick after returning from idle, we will update the busy stats.
-	*/
+	/*
+	 * We may be recently in ticked or tickless idle mode. At the first
+	 * busy tick after returning from idle, we will update the busy stats.
+	 */
 	set_cpu_sd_state_busy();
 	nohz_balance_exit_idle(cpu);
 
@@ -7211,7 +7247,7 @@ need_kick:
 	return 1;
 }
 #else
-static void nohz_idle_balance(struct rq *this_rq, enum cpu_idle_type idle) { }
+static void nohz_idle_balance(struct rq *this_rq, enum cpu_idle_type idle, struct cpumask *mask) { }
 #endif
 
 /*
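
To make the intended effect concrete, here is a toy user-space
illustration of the mask filtering (plain bitmasks instead of struct
cpumask; the CPU numbers are made up): with CPUs 4-7 nohz idle and the
consolidation policy shielding CPUs 6-7, only CPUs 4-5 remain
candidates for nohz balancing and for idle-load-balancer selection.

#include <stdio.h>

int main(void)
{
	unsigned int idle_cpus   = 0xf0;	/* CPUs 4-7 are nohz idle */
	unsigned int shielded    = 0xc0;	/* policy keeps CPUs 6-7 undisturbed */
	unsigned int nonshielded = idle_cpus & ~shielded;

	/* prints 0x30: only CPUs 4 and 5 take part in nohz balancing */
	printf("nonshielded mask: 0x%x\n", nonshielded);
	return 0;
}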