From patchwork Tue Feb 28 11:52:17 2017
X-Patchwork-Submitter: Dario Faggioli
X-Patchwork-Id: 9595299
From: Dario Faggioli
To: xen-devel@lists.xenproject.org
Cc: George Dunlap, Anshul Makkar
Date: Tue, 28 Feb 2017 12:52:17 +0100
Message-ID: <148828273740.26730.11398473846692932330.stgit@Solace.fritz.box>
In-Reply-To: <148828109243.26730.2771577013485070217.stgit@Solace.fritz.box>
References: <148828109243.26730.2771577013485070217.stgit@Solace.fritz.box>
User-Agent: StGit/0.17.1-dirty
Subject: [Xen-devel] [PATCH v3 3/7] xen: credit2: group the runq manipulating functions.
So that they're all close to each other, and also near to the comment
describing the runqueue organization (which is also moved).

No functional change intended.

Signed-off-by: Dario Faggioli
---
Cc: George Dunlap
Cc: Anshul Makkar
---
Changes from v2:
* don't move the 'credit2_runqueue' option parsing code, as suggested
  during review;
---
 xen/common/sched_credit2.c | 408 ++++++++++++++++++++++----------------------
 1 file changed, 204 insertions(+), 204 deletions(-)

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index c00dcbf..b0ec5f8 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -566,7 +566,7 @@ static int get_fallback_cpu(struct csched2_vcpu *svc)
 
 /*
  * Time-to-credit, credit-to-time.
- * 
+ *
  * We keep track of the "residual" time to make sure that frequent short
  * schedules still get accounted for in the end.
  *
@@ -587,7 +587,7 @@ static s_time_t c2t(struct csched2_runqueue_data *rqd, s_time_t credit, struct c
 }
 
 /*
- * Runqueue related code
+ * Runqueue related code.
  */
 
 static inline int vcpu_on_runq(struct csched2_vcpu *svc)
@@ -600,6 +600,208 @@ static inline struct csched2_vcpu * runq_elem(struct list_head *elem)
     return list_entry(elem, struct csched2_vcpu, runq_elem);
 }
 
+static void activate_runqueue(struct csched2_private *prv, int rqi)
+{
+    struct csched2_runqueue_data *rqd;
+
+    rqd = prv->rqd + rqi;
+
+    BUG_ON(!cpumask_empty(&rqd->active));
+
+    rqd->max_weight = 1;
+    rqd->id = rqi;
+    INIT_LIST_HEAD(&rqd->svc);
+    INIT_LIST_HEAD(&rqd->runq);
+    spin_lock_init(&rqd->lock);
+
+    __cpumask_set_cpu(rqi, &prv->active_queues);
+}
+
+static void deactivate_runqueue(struct csched2_private *prv, int rqi)
+{
+    struct csched2_runqueue_data *rqd;
+
+    rqd = prv->rqd + rqi;
+
+    BUG_ON(!cpumask_empty(&rqd->active));
+
+    rqd->id = -1;
+
+    __cpumask_clear_cpu(rqi, &prv->active_queues);
+}
+
+static inline bool_t same_node(unsigned int cpua, unsigned int cpub)
+{
+    return cpu_to_node(cpua) == cpu_to_node(cpub);
+}
+
+static inline bool_t same_socket(unsigned int cpua, unsigned int cpub)
+{
+    return cpu_to_socket(cpua) == cpu_to_socket(cpub);
+}
+
+static inline bool_t same_core(unsigned int cpua, unsigned int cpub)
+{
+    return same_socket(cpua, cpub) &&
+           cpu_to_core(cpua) == cpu_to_core(cpub);
+}
+
+static unsigned int
+cpu_to_runqueue(struct csched2_private *prv, unsigned int cpu)
+{
+    struct csched2_runqueue_data *rqd;
+    unsigned int rqi;
+
+    for ( rqi = 0; rqi < nr_cpu_ids; rqi++ )
+    {
+        unsigned int peer_cpu;
+
+        /*
+         * As soon as we come across an uninitialized runqueue, use it.
+         * In fact, either:
+         *  - we are initializing the first cpu, and we assign it to
+         *    runqueue 0. This is handy, especially if we are dealing
+         *    with the boot cpu (if credit2 is the default scheduler),
+         *    as we would not be able to use cpu_to_socket() and similar
+         *    helpers anyway (the result of which is not reliable yet);
+         *  - we have gone through all the active runqueues, and have not
+         *    found anyone whose cpus' topology matches the one we are
+         *    dealing with, so activating a new runqueue is what we want.
+         */
+        if ( prv->rqd[rqi].id == -1 )
+            break;
+
+        rqd = prv->rqd + rqi;
+        BUG_ON(cpumask_empty(&rqd->active));
+
+        peer_cpu = cpumask_first(&rqd->active);
+        BUG_ON(cpu_to_socket(cpu) == XEN_INVALID_SOCKET_ID ||
+               cpu_to_socket(peer_cpu) == XEN_INVALID_SOCKET_ID);
+
+        if ( opt_runqueue == OPT_RUNQUEUE_ALL ||
+             (opt_runqueue == OPT_RUNQUEUE_CORE && same_core(peer_cpu, cpu)) ||
+             (opt_runqueue == OPT_RUNQUEUE_SOCKET && same_socket(peer_cpu, cpu)) ||
+             (opt_runqueue == OPT_RUNQUEUE_NODE && same_node(peer_cpu, cpu)) )
+            break;
+    }
+
+    /* We really expect to be able to assign each cpu to a runqueue. */
+    BUG_ON(rqi >= nr_cpu_ids);
+
+    return rqi;
+}
+
+/* Find the domain with the highest weight. */
+static void update_max_weight(struct csched2_runqueue_data *rqd, int new_weight,
+                              int old_weight)
+{
+    /* Try to avoid brute-force search:
+     * - If new_weight is larger, max_weight <- new_weight
+     * - If old_weight != max_weight, someone else is still max_weight
+     *   (No action required)
+     * - If old_weight == max_weight, brute-force search for max weight
+     */
+    if ( new_weight > rqd->max_weight )
+    {
+        rqd->max_weight = new_weight;
+        SCHED_STAT_CRANK(upd_max_weight_quick);
+    }
+    else if ( old_weight == rqd->max_weight )
+    {
+        struct list_head *iter;
+        int max_weight = 1;
+
+        list_for_each( iter, &rqd->svc )
+        {
+            struct csched2_vcpu * svc = list_entry(iter, struct csched2_vcpu, rqd_elem);
+
+            if ( svc->weight > max_weight )
+                max_weight = svc->weight;
+        }
+
+        rqd->max_weight = max_weight;
+        SCHED_STAT_CRANK(upd_max_weight_full);
+    }
+
+    if ( unlikely(tb_init_done) )
+    {
+        struct {
+            unsigned rqi:16, max_weight:16;
+        } d;
+        d.rqi = rqd->id;
+        d.max_weight = rqd->max_weight;
+        __trace_var(TRC_CSCHED2_RUNQ_MAX_WEIGHT, 1,
+                    sizeof(d),
+                    (unsigned char *)&d);
+    }
+}
+
+/* Add and remove from runqueue assignment (not active run queue) */
+static void
+_runq_assign(struct csched2_vcpu *svc, struct csched2_runqueue_data *rqd)
+{
+
+    svc->rqd = rqd;
+    list_add_tail(&svc->rqd_elem, &svc->rqd->svc);
+
+    update_max_weight(svc->rqd, svc->weight, 0);
+
+    /* Expected new load based on adding this vcpu */
+    rqd->b_avgload += svc->avgload;
+
+    if ( unlikely(tb_init_done) )
+    {
+        struct {
+            unsigned vcpu:16, dom:16;
+            unsigned rqi:16;
+        } d;
+        d.dom = svc->vcpu->domain->domain_id;
+        d.vcpu = svc->vcpu->vcpu_id;
+        d.rqi=rqd->id;
+        __trace_var(TRC_CSCHED2_RUNQ_ASSIGN, 1,
+                    sizeof(d),
+                    (unsigned char *)&d);
+    }
+
+}
+
+static void
+runq_assign(const struct scheduler *ops, struct vcpu *vc)
+{
+    struct csched2_vcpu *svc = vc->sched_priv;
+
+    ASSERT(svc->rqd == NULL);
+
+    _runq_assign(svc, c2rqd(ops, vc->processor));
+}
+
+static void
+_runq_deassign(struct csched2_vcpu *svc)
+{
+    struct csched2_runqueue_data *rqd = svc->rqd;
+
+    ASSERT(!vcpu_on_runq(svc));
+    ASSERT(!(svc->flags & CSFLAG_scheduled));
+
+    list_del_init(&svc->rqd_elem);
+    update_max_weight(rqd, 0, svc->weight);
+
+    /* Expected new load based on removing this vcpu */
+    rqd->b_avgload = max_t(s_time_t, rqd->b_avgload - svc->avgload, 0);
+
+    svc->rqd = NULL;
+}
+
+static void
+runq_deassign(const struct scheduler *ops, struct vcpu *vc)
+{
+    struct csched2_vcpu *svc = vc->sched_priv;
+
+    ASSERT(svc->rqd == c2rqd(ops, vc->processor));
+
+    _runq_deassign(svc);
+}
+
 /*
  * Track the runq load by gathering instantaneous load samples, and using
  * exponentially weighted moving average (EWMA) for the 'decaying'.
@@ -1214,51 +1416,6 @@ void burn_credits(struct csched2_runqueue_data *rqd,
     }
 }
 
-/* Find the domain with the highest weight. */
-static void update_max_weight(struct csched2_runqueue_data *rqd, int new_weight,
-                              int old_weight)
-{
-    /* Try to avoid brute-force search:
-     * - If new_weight is larger, max_weigth <- new_weight
-     * - If old_weight != max_weight, someone else is still max_weight
-     *   (No action required)
-     * - If old_weight == max_weight, brute-force search for max weight
-     */
-    if ( new_weight > rqd->max_weight )
-    {
-        rqd->max_weight = new_weight;
-        SCHED_STAT_CRANK(upd_max_weight_quick);
-    }
-    else if ( old_weight == rqd->max_weight )
-    {
-        struct list_head *iter;
-        int max_weight = 1;
-
-        list_for_each( iter, &rqd->svc )
-        {
-            struct csched2_vcpu * svc = list_entry(iter, struct csched2_vcpu, rqd_elem);
-
-            if ( svc->weight > max_weight )
-                max_weight = svc->weight;
-        }
-
-        rqd->max_weight = max_weight;
-        SCHED_STAT_CRANK(upd_max_weight_full);
-    }
-
-    if ( unlikely(tb_init_done) )
-    {
-        struct {
-            unsigned rqi:16, max_weight:16;
-        } d;
-        d.rqi = rqd->id;
-        d.max_weight = rqd->max_weight;
-        __trace_var(TRC_CSCHED2_RUNQ_MAX_WEIGHT, 1,
-                    sizeof(d),
-                    (unsigned char *)&d);
-    }
-}
-
 #ifndef NDEBUG
 
 static inline void csched2_vcpu_check(struct vcpu *vc)
@@ -1323,72 +1480,6 @@ csched2_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
     return svc;
 }
 
-/* Add and remove from runqueue assignment (not active run queue) */
-static void
-_runq_assign(struct csched2_vcpu *svc, struct csched2_runqueue_data *rqd)
-{
-
-    svc->rqd = rqd;
-    list_add_tail(&svc->rqd_elem, &svc->rqd->svc);
-
-    update_max_weight(svc->rqd, svc->weight, 0);
-
-    /* Expected new load based on adding this vcpu */
-    rqd->b_avgload += svc->avgload;
-
-    if ( unlikely(tb_init_done) )
-    {
-        struct {
-            unsigned vcpu:16, dom:16;
-            unsigned rqi:16;
-        } d;
-        d.dom = svc->vcpu->domain->domain_id;
-        d.vcpu = svc->vcpu->vcpu_id;
-        d.rqi=rqd->id;
-        __trace_var(TRC_CSCHED2_RUNQ_ASSIGN, 1,
-                    sizeof(d),
-                    (unsigned char *)&d);
-    }
-
-}
-
-static void
-runq_assign(const struct scheduler *ops, struct vcpu *vc)
-{
-    struct csched2_vcpu *svc = vc->sched_priv;
-
-    ASSERT(svc->rqd == NULL);
-
-    _runq_assign(svc, c2rqd(ops, vc->processor));
-}
-
-static void
-_runq_deassign(struct csched2_vcpu *svc)
-{
-    struct csched2_runqueue_data *rqd = svc->rqd;
-
-    ASSERT(!vcpu_on_runq(svc));
-    ASSERT(!(svc->flags & CSFLAG_scheduled));
-
-    list_del_init(&svc->rqd_elem);
-    update_max_weight(rqd, 0, svc->weight);
-
-    /* Expected new load based on removing this vcpu */
-    rqd->b_avgload = max_t(s_time_t, rqd->b_avgload - svc->avgload, 0);
-
-    svc->rqd = NULL;
-}
-
-static void
-runq_deassign(const struct scheduler *ops, struct vcpu *vc)
-{
-    struct csched2_vcpu *svc = vc->sched_priv;
-
-    ASSERT(svc->rqd == c2rqd(ops, vc->processor));
-
-    _runq_deassign(svc);
-}
-
 static void
 csched2_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
 {
@@ -2772,97 +2863,6 @@ csched2_dump(const struct scheduler *ops)
 #undef cpustr
 }
 
-static void activate_runqueue(struct csched2_private *prv, int rqi)
-{
-    struct csched2_runqueue_data *rqd;
-
-    rqd = prv->rqd + rqi;
-
-    BUG_ON(!cpumask_empty(&rqd->active));
-
-    rqd->max_weight = 1;
-    rqd->id = rqi;
-    INIT_LIST_HEAD(&rqd->svc);
-    INIT_LIST_HEAD(&rqd->runq);
-    spin_lock_init(&rqd->lock);
-
-    __cpumask_set_cpu(rqi, &prv->active_queues);
-}
-
-static void deactivate_runqueue(struct csched2_private *prv, int rqi)
-{
-    struct csched2_runqueue_data *rqd;
-
-    rqd = prv->rqd + rqi;
-
-    BUG_ON(!cpumask_empty(&rqd->active));
-
-    rqd->id = -1;
-
-    __cpumask_clear_cpu(rqi, &prv->active_queues);
-}
-
-static inline bool_t same_node(unsigned int cpua, unsigned int cpub)
-{
-    return cpu_to_node(cpua) == cpu_to_node(cpub);
-}
-
-static inline bool_t same_socket(unsigned int cpua, unsigned int cpub)
-{
-    return cpu_to_socket(cpua) == cpu_to_socket(cpub);
-}
-
-static inline bool_t same_core(unsigned int cpua, unsigned int cpub)
-{
-    return same_socket(cpua, cpub) &&
-           cpu_to_core(cpua) == cpu_to_core(cpub);
-}
-
-static unsigned int
-cpu_to_runqueue(struct csched2_private *prv, unsigned int cpu)
-{
-    struct csched2_runqueue_data *rqd;
-    unsigned int rqi;
-
-    for ( rqi = 0; rqi < nr_cpu_ids; rqi++ )
-    {
-        unsigned int peer_cpu;
-
-        /*
-         * As soon as we come across an uninitialized runqueue, use it.
-         * In fact, either:
-         *  - we are initializing the first cpu, and we assign it to
-         *    runqueue 0. This is handy, especially if we are dealing
-         *    with the boot cpu (if credit2 is the default scheduler),
-         *    as we would not be able to use cpu_to_socket() and similar
-         *    helpers anyway (they're result of which is not reliable yet);
-         *  - we have gone through all the active runqueues, and have not
-         *    found anyone whose cpus' topology matches the one we are
-         *    dealing with, so activating a new runqueue is what we want.
-         */
-        if ( prv->rqd[rqi].id == -1 )
-            break;
-
-        rqd = prv->rqd + rqi;
-        BUG_ON(cpumask_empty(&rqd->active));
-
-        peer_cpu = cpumask_first(&rqd->active);
-        BUG_ON(cpu_to_socket(cpu) == XEN_INVALID_SOCKET_ID ||
-               cpu_to_socket(peer_cpu) == XEN_INVALID_SOCKET_ID);
-
-        if ( opt_runqueue == OPT_RUNQUEUE_ALL ||
-             (opt_runqueue == OPT_RUNQUEUE_CORE && same_core(peer_cpu, cpu)) ||
-             (opt_runqueue == OPT_RUNQUEUE_SOCKET && same_socket(peer_cpu, cpu)) ||
-             (opt_runqueue == OPT_RUNQUEUE_NODE && same_node(peer_cpu, cpu)) )
-            break;
-    }
-
-    /* We really expect to be able to assign each cpu to a runqueue. */
-    BUG_ON(rqi >= nr_cpu_ids);
-
-    return rqi;
-}
-
 /* Returns the ID of the runqueue the cpu is assigned to. */
 static unsigned
 init_pdata(struct csched2_private *prv, unsigned int cpu)
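[Editor's illustration, not part of the patch.] The runqueue-matching loop in cpu_to_runqueue() can be modelled outside Xen as follows. This is a minimal sketch under invented assumptions: `pick_runqueue`, the static topology arrays, and `rq_first_cpu` (the first cpu of each active runqueue, `-1` meaning uninitialized) all stand in for Xen's `prv->rqd[]`, cpumasks, and `cpu_to_*()` helpers:

```c
#include <assert.h>

enum opt_runqueue { OPT_RUNQUEUE_CORE, OPT_RUNQUEUE_SOCKET,
                    OPT_RUNQUEUE_NODE, OPT_RUNQUEUE_ALL };

#define NR_CPUS 8

/* Illustrative fixed topology: cpu -> core/socket/node. */
static const int cpu_core[NR_CPUS]   = { 0, 0, 1, 1, 2, 2, 3, 3 };
static const int cpu_socket[NR_CPUS] = { 0, 0, 0, 0, 1, 1, 1, 1 };
static const int cpu_node[NR_CPUS]   = { 0, 0, 0, 0, 1, 1, 1, 1 };

/*
 * Same shape as the patch's loop: scan active runqueues in order and
 * stop at the first one whose peer cpu matches under the chosen policy,
 * or at the first uninitialized runqueue (which gets activated).
 */
static int pick_runqueue(const int *rq_first_cpu, int nr_rq,
                         unsigned int cpu, enum opt_runqueue policy)
{
    int rqi;

    for ( rqi = 0; rqi < nr_rq; rqi++ )
    {
        int peer = rq_first_cpu[rqi];

        /* First unused runqueue: use it for this cpu. */
        if ( peer == -1 )
            break;

        if ( policy == OPT_RUNQUEUE_ALL ||
             (policy == OPT_RUNQUEUE_CORE &&
              cpu_socket[peer] == cpu_socket[cpu] &&
              cpu_core[peer] == cpu_core[cpu]) ||
             (policy == OPT_RUNQUEUE_SOCKET &&
              cpu_socket[peer] == cpu_socket[cpu]) ||
             (policy == OPT_RUNQUEUE_NODE &&
              cpu_node[peer] == cpu_node[cpu]) )
            break;
    }

    return rqi;
}
```

Note the design consequence the patch's comment hints at: the very first cpu always lands on runqueue 0 without consulting topology, which conveniently sidesteps the fact that `cpu_to_socket()` is not yet reliable for the boot cpu.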
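[Editor's illustration, not part of the patch.] The fast/slow-path optimization documented in update_max_weight()'s comment can be reproduced in a standalone model. Everything here is invented for illustration (`toy_rqd`, the fixed-size weight array): Xen walks a linked list of vcpus and emits trace records instead:

```c
#include <assert.h>

/* Toy runqueue: weights of the vcpus currently assigned to it. */
struct toy_rqd {
    int weights[8];
    int nr;
    int max_weight;
};

/*
 * Mirrors the patch's strategy for avoiding a brute-force search:
 *  - new_weight > max_weight: fast path, just raise the cached max;
 *  - old_weight != max_weight: some other vcpu still holds the max,
 *    nothing to do;
 *  - old_weight == max_weight: the departing weight may have been the
 *    max, so rescan all weights (slow path), with 1 as the floor.
 */
static void toy_update_max_weight(struct toy_rqd *rqd, int new_weight,
                                  int old_weight)
{
    if ( new_weight > rqd->max_weight )
        rqd->max_weight = new_weight;
    else if ( old_weight == rqd->max_weight )
    {
        int i, max_weight = 1;

        for ( i = 0; i < rqd->nr; i++ )
            if ( rqd->weights[i] > max_weight )
                max_weight = rqd->weights[i];

        rqd->max_weight = max_weight;
    }
}
```

The caller encodes intent through the arguments, as in the patch: an insertion passes `(new_weight, 0)`, a removal passes `(0, old_weight)`.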
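[Editor's illustration, not part of the patch.] The `b_avgload` bookkeeping done by _runq_assign()/_runq_deassign() — add the vcpu's average load on assignment, subtract it on deassignment, clamping at zero like the patch's `max_t(s_time_t, ..., 0)` — reduces to two one-liners. The function names are invented; only the arithmetic is taken from the patch:

```c
#include <assert.h>
#include <stdint.h>

typedef int64_t s_time_t;   /* Xen's signed time type, modelled as int64_t */

/* Expected new runqueue load after adding a vcpu. */
static s_time_t load_on_assign(s_time_t b_avgload, s_time_t avgload)
{
    return b_avgload + avgload;
}

/* Expected new runqueue load after removing a vcpu, clamped at zero. */
static s_time_t load_on_deassign(s_time_t b_avgload, s_time_t avgload)
{
    s_time_t v = b_avgload - avgload;

    return v > 0 ? v : 0;
}
```

The clamp matters because `b_avgload` is a decayed average: the load sampled while the vcpu was assigned may already have partially decayed away, so a plain subtraction could briefly go negative.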