From patchwork Fri May 30 06:36:12 2014
From: Yuyang Du <yuyang.du@intel.com>
To: mingo@redhat.com, peterz@infradead.org, rafael.j.wysocki@intel.com,
	linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org
Cc: arjan.van.de.ven@intel.com, len.brown@intel.com, alan.cox@intel.com,
	mark.gross@intel.com, pjt@google.com, bsegall@google.com,
	morten.rasmussen@arm.com, vincent.guittot@linaro.org,
	rajeev.d.muralidhar@intel.com, vishwesh.m.rudramuni@intel.com,
	nicole.chalhoub@intel.com, ajaya.durg@intel.com,
	harinarayanan.seshadri@intel.com, jacob.jun.pan@linux.intel.com,
	fengguang.wu@intel.com, yuyang.du@intel.com
Subject: [RFC PATCH 16/16 v3] Intercept periodic load balancing
Date: Fri, 30 May 2014 14:36:12 +0800
Message-Id: <1401431772-14320-17-git-send-email-yuyang.du@intel.com>
In-Reply-To: <1401431772-14320-1-git-send-email-yuyang.du@intel.com>
References: <1401431772-14320-1-git-send-email-yuyang.du@intel.com>

We intercept periodic load balancing so that load, and the balancing
itself, stay contained within the consolidated CPUs, according to our
consolidation mechanism. In periodic load balancing, we do two things:

1) Skip pulling tasks to the non-consolidated CPUs.

2) In addition, for a consolidated idle CPU, aggressively pull tasks
   from the non-consolidated CPUs.
Signed-off-by: Yuyang Du <yuyang.du@intel.com>
---
 kernel/sched/fair.c | 51 ++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 44 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1b8dd45..d22ac87 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7260,17 +7260,54 @@ static void nohz_idle_balance(struct rq *this_rq, enum cpu_idle_type idle, struc
 static void run_rebalance_domains(struct softirq_action *h)
 {
 	struct rq *this_rq = this_rq();
+	int this_cpu = cpu_of(this_rq);
+	struct sched_domain *sd;
 	enum cpu_idle_type idle = this_rq->idle_balance ?
 						CPU_IDLE : CPU_NOT_IDLE;
 
-	rebalance_domains(this_rq, idle);
+	rcu_read_lock();
+	sd = top_flag_domain(this_cpu, SD_WORKLOAD_CONSOLIDATION);
+	if (sd) {
+		struct cpumask *nonshielded_cpus = __get_cpu_var(load_balance_mask);
 
-	/*
-	 * If this cpu has a pending nohz_balance_kick, then do the
-	 * balancing on behalf of the other idle cpus whose ticks are
-	 * stopped.
-	 */
-	nohz_idle_balance(this_rq, idle);
+		/*
+		 * if we encounter shielded cpus here, don't do balance on them
+		 */
+		cpumask_copy(nonshielded_cpus, cpu_active_mask);
+
+		wc_nonshielded_mask(this_cpu, sd, nonshielded_cpus);
+
+		/*
+		 * aggressively unload the shielded cpus to unshielded cpus
+		 */
+		wc_unload(nonshielded_cpus, sd);
+		rcu_read_unlock();
+
+		if (cpumask_test_cpu(this_cpu, nonshielded_cpus)) {
+			struct cpumask *idle_cpus = __get_cpu_var(local_cpu_mask);
+			cpumask_and(idle_cpus, nonshielded_cpus, nohz.idle_cpus_mask);
+
+			rebalance_domains(this_rq, idle);
+
+			/*
+			 * If this cpu has a pending nohz_balance_kick, then do the
+			 * balancing on behalf of the other idle cpus whose ticks are
+			 * stopped.
+			 */
+			nohz_idle_balance(this_rq, idle, idle_cpus);
+		}
+	}
+	else {
+		rcu_read_unlock();
+		rebalance_domains(this_rq, idle);
+
+		/*
+		 * If this cpu has a pending nohz_balance_kick, then do the
+		 * balancing on behalf of the other idle cpus whose ticks are
+		 * stopped.
+		 */
+		nohz_idle_balance(this_rq, idle, nohz.idle_cpus_mask);
+	}
 }
 
 /*
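
A note for readers who do not have the earlier patches of this series at
hand: top_flag_domain(), wc_nonshielded_mask() and wc_unload(), as well as
the extra cpumask argument to nohz_idle_balance(), are introduced by the
preceding patches and are not in mainline fair.c. As a rough illustrative
sketch only (the sd->shielded_cpus field below is hypothetical; the real
bookkeeping lives in the consolidation code), wc_nonshielded_mask() is
expected to reduce the candidate mask to the CPUs the consolidation policy
still wants to keep busy:

/*
 * Illustration only, not part of this patch: assume the workload
 * consolidation code tracks the currently shielded (to-be-idled)
 * CPUs of a domain in a hypothetical per-domain mask.
 */
static void wc_nonshielded_mask_sketch(int cpu, struct sched_domain *sd,
				       struct cpumask *mask)
{
	/* the caller has already copied cpu_active_mask into @mask */
	cpumask_andnot(mask, mask, sd->shielded_cpus);	/* hypothetical field */
}

With the mask reduced this way, a shielded CPU simply skips both
rebalance_domains() and nohz_idle_balance(), while a non-shielded CPU
balances only on behalf of the non-shielded idle CPUs (the cpumask_and()
with nohz.idle_cpus_mask in the hunk above); wc_unload() additionally
pushes work off the shielded CPUs onto the non-shielded ones, per the
changelog.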