From patchwork Thu Feb 2 17:58:28 2017
X-Patchwork-Submitter: Dario Faggioli
X-Patchwork-Id: 9552837
From: Dario Faggioli
To: xen-devel@lists.xenproject.org
Cc: George Dunlap, Anshul Makkar, Meng Xu
Date: Thu, 02 Feb 2017 18:58:28 +0100
Message-ID: <148605830789.27525.6816246611792459648.stgit@Solace.fritz.box>
User-Agent: StGit/0.17.1-dirty
Subject: [Xen-devel] [PATCH v2] xen: sched: harmonize debug dump output among schedulers.

Information we currently print for idle pCPUs is rather useless.
Credit2 already stopped showing it; do the same for Credit and RTDS.

Also, define a new CPU status dump hook, which is not defined by those
schedulers which already dump such info in other ways (e.g., Credit2,
which does that while dumping runqueue information).
This also means that, still in Credit2, we can keep the runqueue and
pCPU info closer together.

Signed-off-by: Dario Faggioli
Acked-by: Meng Xu
---
Cc: George Dunlap
Cc: Anshul Makkar
---
This is basically the rebase of "xen: sched: improve debug dump
output.", on top of "xen: credit2: improve debug dump output."
(i.e., commit 3af86727b8204).

Sorry again, George, for the mess... I was sure I hadn't sent the
first one out yet, when I sent it out for what turned out to be the
second time (and, even worse, slightly reworked! :-( ).

I'm keeping Meng's ack, as I did not touch the RTDS part, wrt the
patch he sent it against.
---
 xen/common/sched_credit.c  |  6 +++---
 xen/common/sched_credit2.c | 34 +++++++++++-----------------------
 xen/common/sched_rt.c      |  9 ++++++++-
 xen/common/schedule.c      |  8 ++++----
 4 files changed, 26 insertions(+), 31 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index ad20819..7c0ff47 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -1988,13 +1988,13 @@ csched_dump_pcpu(const struct scheduler *ops, int cpu)
     runq = &spc->runq;
 
     cpumask_scnprintf(cpustr, sizeof(cpustr), per_cpu(cpu_sibling_mask, cpu));
-    printk(" sort=%d, sibling=%s, ", spc->runq_sort_last, cpustr);
+    printk("CPU[%02d] sort=%d, sibling=%s, ", cpu, spc->runq_sort_last, cpustr);
     cpumask_scnprintf(cpustr, sizeof(cpustr), per_cpu(cpu_core_mask, cpu));
     printk("core=%s\n", cpustr);
 
-    /* current VCPU */
+    /* current VCPU (nothing to say if that's the idle vcpu). */
     svc = CSCHED_VCPU(curr_on_cpu(cpu));
-    if ( svc )
+    if ( svc && !is_idle_vcpu(svc->vcpu) )
     {
         printk("\trun: ");
         csched_dump_vcpu(svc);
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 93c6d32..9f5a190 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -2627,28 +2627,15 @@ csched2_dump_vcpu(struct csched2_private *prv, struct csched2_vcpu *svc)
     printk("\n");
 }
 
-static void
-csched2_dump_pcpu(const struct scheduler *ops, int cpu)
+static inline void
+dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct csched2_private *prv = CSCHED2_PRIV(ops);
     struct csched2_vcpu *svc;
-    unsigned long flags;
-    spinlock_t *lock;
 #define cpustr keyhandler_scratch
 
-    /*
-     * We need both locks:
-     *  - we print current, so we need the runqueue lock for this
-     *    cpu (the one of the runqueue this cpu is associated to);
-     *  - csched2_dump_vcpu() wants to access domains' weights,
-     *    which are protected by the private scheduler lock.
-     */
-    read_lock_irqsave(&prv->lock, flags);
-    lock = per_cpu(schedule_data, cpu).schedule_lock;
-    spin_lock(lock);
-
     cpumask_scnprintf(cpustr, sizeof(cpustr), per_cpu(cpu_sibling_mask, cpu));
-    printk(" runq=%d, sibling=%s, ", c2r(ops, cpu), cpustr);
+    printk("CPU[%02d] runq=%d, sibling=%s, ", cpu, c2r(ops, cpu), cpustr);
     cpumask_scnprintf(cpustr, sizeof(cpustr), per_cpu(cpu_core_mask, cpu));
     printk("core=%s\n", cpustr);
 
@@ -2659,9 +2646,6 @@ csched2_dump_pcpu(const struct scheduler *ops, int cpu)
         printk("\trun: ");
         csched2_dump_vcpu(prv, svc);
     }
-
-    spin_unlock(lock);
-    read_unlock_irqrestore(&prv->lock, flags);
 #undef cpustr
 }
 
@@ -2671,7 +2655,7 @@ csched2_dump(const struct scheduler *ops)
     struct list_head *iter_sdom;
     struct csched2_private *prv = CSCHED2_PRIV(ops);
     unsigned long flags;
-    int i, loop;
+    unsigned int i, j, loop;
 #define cpustr keyhandler_scratch
 
     /*
@@ -2741,7 +2725,6 @@ csched2_dump(const struct scheduler *ops)
         }
     }
 
-    printk("Runqueue info:\n");
     for_each_cpu(i, &prv->active_queues)
     {
         struct csched2_runqueue_data *rqd = prv->rqd + i;
@@ -2750,7 +2733,13 @@ csched2_dump(const struct scheduler *ops)
         /* We need the lock to scan the runqueue. */
         spin_lock(&rqd->lock);
 
-        printk("runqueue %d:\n", i);
+
+        printk("Runqueue %d:\n", i);
+
+        for_each_cpu(j, &rqd->active)
+            dump_pcpu(ops, j);
+
+        printk("RUNQ:\n");
         list_for_each( iter, runq )
         {
             struct csched2_vcpu *svc = __runq_elem(iter);
@@ -3108,7 +3097,6 @@ static const struct scheduler sched_credit2_def = {
 
     .do_schedule    = csched2_schedule,
     .context_saved  = csched2_context_saved,
-    .dump_cpu_state = csched2_dump_pcpu,
     .dump_settings  = csched2_dump,
     .init           = csched2_init,
     .deinit         = csched2_deinit,
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 24b4b22..f2d979c 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -320,10 +320,17 @@ static void
 rt_dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct rt_private *prv = rt_priv(ops);
+    struct rt_vcpu *svc;
     unsigned long flags;
 
     spin_lock_irqsave(&prv->lock, flags);
-    rt_dump_vcpu(ops, rt_vcpu(curr_on_cpu(cpu)));
+    printk("CPU[%02d]\n", cpu);
+    /* current VCPU (nothing to say if that's the idle vcpu). */
+    svc = rt_vcpu(curr_on_cpu(cpu));
+    if ( svc && !is_idle_vcpu(svc->vcpu) )
+    {
+        rt_dump_vcpu(ops, svc);
+    }
     spin_unlock_irqrestore(&prv->lock, flags);
 }
 
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index ed77990..e4320f3 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -1844,11 +1844,11 @@ void schedule_dump(struct cpupool *c)
         cpus = &cpupool_free_cpus;
     }
 
-    printk("CPUs info:\n");
-    for_each_cpu (i, cpus)
+    if ( sched->dump_cpu_state != NULL )
     {
-        printk("CPU[%02d] ", i);
-        SCHED_OP(sched, dump_cpu_state, i);
+        printk("CPUs info:\n");
+        for_each_cpu (i, cpus)
+            SCHED_OP(sched, dump_cpu_state, i);
     }
 }