From patchwork Wed Mar 1 14:53:16 2017
X-Patchwork-Submitter: Dario Faggioli
X-Patchwork-Id: 9598539
From: Dario Faggioli
To: xen-devel@lists.xenproject.org
Date: Wed, 01 Mar 2017 15:53:16 +0100
Message-ID: <148837999639.11900.4722802322841206096.stgit@Solace.fritz.box>
In-Reply-To: <148837861276.11900.8292677471375175885.stgit@Solace.fritz.box>
References: <148837861276.11900.8292677471375175885.stgit@Solace.fritz.box>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Cc: George Dunlap, Anshul Makkar
Subject: [Xen-devel] [PATCH v4 5/7] xen: credit2: group the runq manipulating functions.

So that they're all close to one another, and also near the comment
describing the runqueue organization (which is also moved).

No functional change intended.

Signed-off-by: Dario Faggioli
Acked-by: George Dunlap
---
Cc: George Dunlap
Cc: Anshul Makkar
---
Changes from v3:
* fix a typo in the changelog.

Changes from v2:
* don't move the 'credit2_runqueue' option parsing code, as suggested
  during review.
---
 xen/common/sched_credit2.c | 408 ++++++++++++++++++++++----------------------
 1 file changed, 204 insertions(+), 204 deletions(-)

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 1f57239..66b7f96 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -569,7 +569,7 @@ static int get_fallback_cpu(struct csched2_vcpu *svc)
 
 /*
  * Time-to-credit, credit-to-time.
- * 
+ *
  * We keep track of the "residual" time to make sure that frequent short
  * schedules still get accounted for in the end.
  *
@@ -590,7 +590,7 @@ static s_time_t c2t(struct csched2_runqueue_data *rqd, s_time_t credit, struct c
 }
 
 /*
- * Runqueue related code
+ * Runqueue related code.
  */
 
 static inline int vcpu_on_runq(struct csched2_vcpu *svc)
@@ -603,6 +603,208 @@ static inline struct csched2_vcpu * runq_elem(struct list_head *elem)
     return list_entry(elem, struct csched2_vcpu, runq_elem);
 }
 
+static void activate_runqueue(struct csched2_private *prv, int rqi)
+{
+    struct csched2_runqueue_data *rqd;
+
+    rqd = prv->rqd + rqi;
+
+    BUG_ON(!cpumask_empty(&rqd->active));
+
+    rqd->max_weight = 1;
+    rqd->id = rqi;
+    INIT_LIST_HEAD(&rqd->svc);
+    INIT_LIST_HEAD(&rqd->runq);
+    spin_lock_init(&rqd->lock);
+
+    __cpumask_set_cpu(rqi, &prv->active_queues);
+}
+
+static void deactivate_runqueue(struct csched2_private *prv, int rqi)
+{
+    struct csched2_runqueue_data *rqd;
+
+    rqd = prv->rqd + rqi;
+
+    BUG_ON(!cpumask_empty(&rqd->active));
+
+    rqd->id = -1;
+
+    __cpumask_clear_cpu(rqi, &prv->active_queues);
+}
+
+static inline bool_t same_node(unsigned int cpua, unsigned int cpub)
+{
+    return cpu_to_node(cpua) == cpu_to_node(cpub);
+}
+
+static inline bool_t same_socket(unsigned int cpua, unsigned int cpub)
+{
+    return cpu_to_socket(cpua) == cpu_to_socket(cpub);
+}
+
+static inline bool_t same_core(unsigned int cpua, unsigned int cpub)
+{
+    return same_socket(cpua, cpub) &&
+           cpu_to_core(cpua) == cpu_to_core(cpub);
+}
+
+static unsigned int
+cpu_to_runqueue(struct csched2_private *prv, unsigned int cpu)
+{
+    struct csched2_runqueue_data *rqd;
+    unsigned int rqi;
+
+    for ( rqi = 0; rqi < nr_cpu_ids; rqi++ )
+    {
+        unsigned int peer_cpu;
+
+        /*
+         * As soon as we come across an uninitialized runqueue, use it.
+         * In fact, either:
+         *  - we are initializing the first cpu, and we assign it to
+         *    runqueue 0. This is handy, especially if we are dealing
+         *    with the boot cpu (if credit2 is the default scheduler),
+         *    as we would not be able to use cpu_to_socket() and similar
+         *    helpers anyway (their result is not reliable yet);
+         *  - we have gone through all the active runqueues, and have not
+         *    found anyone whose cpus' topology matches the one we are
+         *    dealing with, so activating a new runqueue is what we want.
+         */
+        if ( prv->rqd[rqi].id == -1 )
+            break;
+
+        rqd = prv->rqd + rqi;
+        BUG_ON(cpumask_empty(&rqd->active));
+
+        peer_cpu = cpumask_first(&rqd->active);
+        BUG_ON(cpu_to_socket(cpu) == XEN_INVALID_SOCKET_ID ||
+               cpu_to_socket(peer_cpu) == XEN_INVALID_SOCKET_ID);
+
+        if ( opt_runqueue == OPT_RUNQUEUE_ALL ||
+             (opt_runqueue == OPT_RUNQUEUE_CORE && same_core(peer_cpu, cpu)) ||
+             (opt_runqueue == OPT_RUNQUEUE_SOCKET && same_socket(peer_cpu, cpu)) ||
+             (opt_runqueue == OPT_RUNQUEUE_NODE && same_node(peer_cpu, cpu)) )
+            break;
+    }
+
+    /* We really expect to be able to assign each cpu to a runqueue. */
+    BUG_ON(rqi >= nr_cpu_ids);
+
+    return rqi;
+}
+
+/* Find the domain with the highest weight. */
+static void update_max_weight(struct csched2_runqueue_data *rqd, int new_weight,
+                              int old_weight)
+{
+    /* Try to avoid brute-force search:
+     * - If new_weight is larger, max_weight <- new_weight
+     * - If old_weight != max_weight, someone else is still max_weight
+     *   (No action required)
+     * - If old_weight == max_weight, brute-force search for max weight
+     */
+    if ( new_weight > rqd->max_weight )
+    {
+        rqd->max_weight = new_weight;
+        SCHED_STAT_CRANK(upd_max_weight_quick);
+    }
+    else if ( old_weight == rqd->max_weight )
+    {
+        struct list_head *iter;
+        int max_weight = 1;
+
+        list_for_each( iter, &rqd->svc )
+        {
+            struct csched2_vcpu * svc = list_entry(iter, struct csched2_vcpu, rqd_elem);
+
+            if ( svc->weight > max_weight )
+                max_weight = svc->weight;
+        }
+
+        rqd->max_weight = max_weight;
+        SCHED_STAT_CRANK(upd_max_weight_full);
+    }
+
+    if ( unlikely(tb_init_done) )
+    {
+        struct {
+            unsigned rqi:16, max_weight:16;
+        } d;
+        d.rqi = rqd->id;
+        d.max_weight = rqd->max_weight;
+        __trace_var(TRC_CSCHED2_RUNQ_MAX_WEIGHT, 1,
+                    sizeof(d),
+                    (unsigned char *)&d);
+    }
+}
+
+/* Add and remove from runqueue assignment (not active run queue) */
+static void
+_runq_assign(struct csched2_vcpu *svc, struct csched2_runqueue_data *rqd)
+{
+
+    svc->rqd = rqd;
+    list_add_tail(&svc->rqd_elem, &svc->rqd->svc);
+
+    update_max_weight(svc->rqd, svc->weight, 0);
+
+    /* Expected new load based on adding this vcpu */
+    rqd->b_avgload += svc->avgload;
+
+    if ( unlikely(tb_init_done) )
+    {
+        struct {
+            unsigned vcpu:16, dom:16;
+            unsigned rqi:16;
+        } d;
+        d.dom = svc->vcpu->domain->domain_id;
+        d.vcpu = svc->vcpu->vcpu_id;
+        d.rqi = rqd->id;
+        __trace_var(TRC_CSCHED2_RUNQ_ASSIGN, 1,
+                    sizeof(d),
+                    (unsigned char *)&d);
+    }
+
+}
+
+static void
+runq_assign(const struct scheduler *ops, struct vcpu *vc)
+{
+    struct csched2_vcpu *svc = vc->sched_priv;
+
+    ASSERT(svc->rqd == NULL);
+
+    _runq_assign(svc, c2rqd(ops, vc->processor));
+}
+
+static void
+_runq_deassign(struct csched2_vcpu *svc)
+{
+    struct csched2_runqueue_data *rqd = svc->rqd;
+
+    ASSERT(!vcpu_on_runq(svc));
+    ASSERT(!(svc->flags & CSFLAG_scheduled));
+
+    list_del_init(&svc->rqd_elem);
+    update_max_weight(rqd, 0, svc->weight);
+
+    /* Expected new load based on removing this vcpu */
+    rqd->b_avgload = max_t(s_time_t, rqd->b_avgload - svc->avgload, 0);
+
+    svc->rqd = NULL;
+}
+
+static void
+runq_deassign(const struct scheduler *ops, struct vcpu *vc)
+{
+    struct csched2_vcpu *svc = vc->sched_priv;
+
+    ASSERT(svc->rqd == c2rqd(ops, vc->processor));
+
+    _runq_deassign(svc);
+}
+
 /*
  * Track the runq load by gathering instantaneous load samples, and using
  * exponentially weighted moving average (EWMA) for the 'decaying'.
@@ -1234,51 +1436,6 @@ void burn_credits(struct csched2_runqueue_data *rqd,
     }
 }
 
-/* Find the domain with the highest weight. */
-static void update_max_weight(struct csched2_runqueue_data *rqd, int new_weight,
-                              int old_weight)
-{
-    /* Try to avoid brute-force search:
-     * - If new_weight is larger, max_weigth <- new_weight
-     * - If old_weight != max_weight, someone else is still max_weight
-     *   (No action required)
-     * - If old_weight == max_weight, brute-force search for max weight
-     */
-    if ( new_weight > rqd->max_weight )
-    {
-        rqd->max_weight = new_weight;
-        SCHED_STAT_CRANK(upd_max_weight_quick);
-    }
-    else if ( old_weight == rqd->max_weight )
-    {
-        struct list_head *iter;
-        int max_weight = 1;
-
-        list_for_each( iter, &rqd->svc )
-        {
-            struct csched2_vcpu * svc = list_entry(iter, struct csched2_vcpu, rqd_elem);
-
-            if ( svc->weight > max_weight )
-                max_weight = svc->weight;
-        }
-
-        rqd->max_weight = max_weight;
-        SCHED_STAT_CRANK(upd_max_weight_full);
-    }
-
-    if ( unlikely(tb_init_done) )
-    {
-        struct {
-            unsigned rqi:16, max_weight:16;
-        } d;
-        d.rqi = rqd->id;
-        d.max_weight = rqd->max_weight;
-        __trace_var(TRC_CSCHED2_RUNQ_MAX_WEIGHT, 1,
-                    sizeof(d),
-                    (unsigned char *)&d);
-    }
-}
-
 #ifndef NDEBUG
 static inline void
 csched2_vcpu_check(struct vcpu *vc)
@@ -1343,72 +1500,6 @@ csched2_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
     return svc;
 }
 
-/* Add and remove from runqueue assignment (not active run queue) */
-static void
-_runq_assign(struct csched2_vcpu *svc, struct csched2_runqueue_data *rqd)
-{
-
-    svc->rqd = rqd;
-    list_add_tail(&svc->rqd_elem, &svc->rqd->svc);
-
-    update_max_weight(svc->rqd, svc->weight, 0);
-
-    /* Expected new load based on adding this vcpu */
-    rqd->b_avgload += svc->avgload;
-
-    if ( unlikely(tb_init_done) )
-    {
-        struct {
-            unsigned vcpu:16, dom:16;
-            unsigned rqi:16;
-        } d;
-        d.dom = svc->vcpu->domain->domain_id;
-        d.vcpu = svc->vcpu->vcpu_id;
-        d.rqi=rqd->id;
-        __trace_var(TRC_CSCHED2_RUNQ_ASSIGN, 1,
-                    sizeof(d),
-                    (unsigned char *)&d);
-    }
-
-}
-
-static void
-runq_assign(const struct scheduler *ops, struct vcpu *vc)
-{
-    struct csched2_vcpu *svc = vc->sched_priv;
-
-    ASSERT(svc->rqd == NULL);
-
-    _runq_assign(svc, c2rqd(ops, vc->processor));
-}
-
-static void
-_runq_deassign(struct csched2_vcpu *svc)
-{
-    struct csched2_runqueue_data *rqd = svc->rqd;
-
-    ASSERT(!vcpu_on_runq(svc));
-    ASSERT(!(svc->flags & CSFLAG_scheduled));
-
-    list_del_init(&svc->rqd_elem);
-    update_max_weight(rqd, 0, svc->weight);
-
-    /* Expected new load based on removing this vcpu */
-    rqd->b_avgload = max_t(s_time_t, rqd->b_avgload - svc->avgload, 0);
-
-    svc->rqd = NULL;
-}
-
-static void
-runq_deassign(const struct scheduler *ops, struct vcpu *vc)
-{
-    struct csched2_vcpu *svc = vc->sched_priv;
-
-    ASSERT(svc->rqd == c2rqd(ops, vc->processor));
-
-    _runq_deassign(svc);
-}
-
 static void
 csched2_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
 {
@@ -2794,97 +2885,6 @@ csched2_dump(const struct scheduler *ops)
 #undef cpustr
 }
 
-static void activate_runqueue(struct csched2_private *prv, int rqi)
-{
-    struct csched2_runqueue_data *rqd;
-
-    rqd = prv->rqd + rqi;
-
-    BUG_ON(!cpumask_empty(&rqd->active));
-
-    rqd->max_weight = 1;
-    rqd->id = rqi;
-    INIT_LIST_HEAD(&rqd->svc);
-    INIT_LIST_HEAD(&rqd->runq);
-    spin_lock_init(&rqd->lock);
-
-    __cpumask_set_cpu(rqi, &prv->active_queues);
-}
-
-static void deactivate_runqueue(struct csched2_private *prv, int rqi)
-{
-    struct csched2_runqueue_data *rqd;
-
-    rqd = prv->rqd + rqi;
-
-    BUG_ON(!cpumask_empty(&rqd->active));
-
-    rqd->id = -1;
-
-    __cpumask_clear_cpu(rqi, &prv->active_queues);
-}
-
-static inline bool_t same_node(unsigned int cpua, unsigned int cpub)
-{
-    return cpu_to_node(cpua) == cpu_to_node(cpub);
-}
-
-static inline bool_t same_socket(unsigned int cpua, unsigned int cpub)
-{
-    return cpu_to_socket(cpua) == cpu_to_socket(cpub);
-}
-
-static inline bool_t same_core(unsigned int cpua, unsigned int cpub)
-{
-    return same_socket(cpua, cpub) &&
-           cpu_to_core(cpua) == cpu_to_core(cpub);
-}
-
-static unsigned int
-cpu_to_runqueue(struct csched2_private *prv, unsigned int cpu)
-{
-    struct csched2_runqueue_data *rqd;
-    unsigned int rqi;
-
-    for ( rqi = 0; rqi < nr_cpu_ids; rqi++ )
-    {
-        unsigned int peer_cpu;
-
-        /*
-         * As soon as we come across an uninitialized runqueue, use it.
-         * In fact, either:
-         *  - we are initializing the first cpu, and we assign it to
-         *    runqueue 0. This is handy, especially if we are dealing
-         *    with the boot cpu (if credit2 is the default scheduler),
-         *    as we would not be able to use cpu_to_socket() and similar
-         *    helpers anyway (they're result of which is not reliable yet);
-         *  - we have gone through all the active runqueues, and have not
-         *    found anyone whose cpus' topology matches the one we are
-         *    dealing with, so activating a new runqueue is what we want.
-         */
-        if ( prv->rqd[rqi].id == -1 )
-            break;
-
-        rqd = prv->rqd + rqi;
-        BUG_ON(cpumask_empty(&rqd->active));
-
-        peer_cpu = cpumask_first(&rqd->active);
-        BUG_ON(cpu_to_socket(cpu) == XEN_INVALID_SOCKET_ID ||
-               cpu_to_socket(peer_cpu) == XEN_INVALID_SOCKET_ID);
-
-        if ( opt_runqueue == OPT_RUNQUEUE_ALL ||
-             (opt_runqueue == OPT_RUNQUEUE_CORE && same_core(peer_cpu, cpu)) ||
-             (opt_runqueue == OPT_RUNQUEUE_SOCKET && same_socket(peer_cpu, cpu)) ||
-             (opt_runqueue == OPT_RUNQUEUE_NODE && same_node(peer_cpu, cpu)) )
-            break;
-    }
-
-    /* We really expect to be able to assign each cpu to a runqueue. */
-    BUG_ON(rqi >= nr_cpu_ids);
-
-    return rqi;
-}
-
 /* Returns the ID of the runqueue the cpu is assigned to. */
 static unsigned
 init_pdata(struct csched2_private *prv, unsigned int cpu)
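For readers following along, the core of the moved update_max_weight() is a
cached-maximum heuristic: raising a weight above the cached maximum is O(1),
and a full rescan is only needed when the element that held the maximum
lowers its weight or goes away (the patch counts the two paths with
SCHED_STAT_CRANK(upd_max_weight_quick) and upd_max_weight_full). Below is a
minimal standalone sketch of that heuristic; it uses plain C with simplified,
hypothetical names and a fixed-size array in place of the scheduler's lists,
so it is an illustration of the idea, not the Xen code itself:

/*
 * Standalone sketch of the incremental-maximum heuristic used by
 * update_max_weight(): simplified types, no tracing, not Xen code.
 */
#include <stdio.h>

#define NR_ITEMS 4

static int weights[NR_ITEMS] = { 256, 256, 512, 128 };
static int max_weight = 1;

/* An item's weight changed from old_w to new_w; refresh the cached max. */
static void update_max(int new_w, int old_w)
{
    if ( new_w > max_weight )
    {
        /* Fast path: the new weight beats the cached maximum. */
        max_weight = new_w;
    }
    else if ( old_w == max_weight )
    {
        /*
         * Slow path: the previous maximum may be gone, so rescan.
         * If old_w != max_weight, some other item still holds the
         * maximum and nothing needs to change.
         */
        int i, max = 1;

        for ( i = 0; i < NR_ITEMS; i++ )
            if ( weights[i] > max )
                max = weights[i];
        max_weight = max;
    }
}

int main(void)
{
    int i;

    /* Establish the initial maximum (old weight 0, as _runq_assign does). */
    for ( i = 0; i < NR_ITEMS; i++ )
        update_max(weights[i], 0);
    printf("max = %d\n", max_weight);   /* prints 512 */

    /* Lower the current maximum: this is what triggers the rescan path. */
    weights[2] = 64;
    update_max(weights[2], 512);
    printf("max = %d\n", max_weight);   /* prints 256 */

    return 0;
}

The trade-off is the same one the comment in the patch describes: insertions
and weight increases stay cheap, and the O(n) walk over the runqueue's vcpus
is confined to the comparatively rare case where the maximum itself drops.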