From patchwork Wed Jan 18 01:10:14 2017
X-Patchwork-Submitter: Dario Faggioli
X-Patchwork-Id: 9522385
From: Dario Faggioli <raistlin.df@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Wed, 18 Jan 2017 02:10:14 +0100
Message-ID: <148470181453.5815.10454358470087815699.stgit@Solace.fritz.box>
User-Agent: StGit/0.17.1-dirty
Cc: George Dunlap, Anshul Makkar
Subject: [Xen-devel] [PATCH] xen: credit2: improve debug dump output.

Scheduling information debug dump for Credit2 is hard to read, as it contains the same information repeated multiple times in different ways. In fact, in Credit2, CPUs are grouped in runqueues.
Before this change, for each CPU, we were printing the whole content of the runqueue, as shown below:

 CPU[00] sibling=000003, core=0000ff
     run: [32767.0] flags=0 cpu=0 credit=-1073741824 [w=0] load=0 (~0%)
       1: [0.0] flags=0 cpu=2 credit=3860932 [w=256] load=262144 (~100%)
       2: [0.1] flags=0 cpu=2 credit=3859906 [w=256] load=262144 (~100%)
 CPU[01] sibling=000003, core=0000ff
     run: [32767.1] flags=0 cpu=1 credit=-1073741824 [w=0] load=0 (~0%)
       1: [0.0] flags=0 cpu=2 credit=2859840 [w=256] load=262144 (~100%)
       2: [0.3] flags=0 cpu=2 credit=-17466062 [w=256] load=262144 (~100%)
 CPU[02] sibling=00000c, core=0000ff
     run: [0.0] flags=2 cpu=2 credit=1858628 [w=256] load=262144 (~100%)
       1: [0.3] flags=0 cpu=2 credit=-17466062 [w=256] load=262144 (~100%)
       2: [0.1] flags=0 cpu=2 credit=-23957055 [w=256] load=262144 (~100%)
 CPU[03] sibling=00000c, core=0000ff
     run: [32767.3] flags=0 cpu=3 credit=-1073741824 [w=0] load=0 (~0%)
       1: [0.1] flags=0 cpu=2 credit=-3957055 [w=256] load=262144 (~100%)
       2: [0.0] flags=0 cpu=2 credit=-6216254 [w=256] load=262144 (~100%)
 CPU[04] sibling=000030, core=0000ff
     run: [32767.4] flags=0 cpu=4 credit=-1073741824 [w=0] load=0 (~0%)
       1: [0.1] flags=0 cpu=2 credit=3782667 [w=256] load=262144 (~100%)
       2: [0.3] flags=0 cpu=2 credit=-16287483 [w=256] load=262144 (~100%)

As can be seen, every CPU prints the whole content of the runqueue it belongs to, as sampled at the time that CPU is dumped. This is cumbersome and hard to interpret.

In the new output format we print, for each CPU, only the vCPU that is running there (if that is the idle vCPU, nothing is printed), while the content of each runqueue is printed only once, in a dedicated section. An example:

 CPUs info:
 CPU[02] runq=0, sibling=00000c, core=0000ff
     run: [0.3] flags=2 cpu=2 credit=8054391 [w=256] load=262144 (~100%)
 CPU[14] runq=1, sibling=00c000, core=00ff00
     run: [0.4] flags=2 cpu=14 credit=8771420 [w=256] load=262144 (~100%)
 ...
 Runqueue info:
 runqueue 0:
   0: [0.1] flags=0 cpu=2 credit=7869771 [w=256] load=262144 (~100%)
   1: [0.0] flags=0 cpu=2 credit=7709649 [w=256] load=262144 (~100%)
 runqueue 1:
   0: [0.5] flags=0 cpu=14 credit=-1188 [w=256] load=262144 (~100%)

Note that there is still a risk of inconsistency between what is printed in the 'Runqueue info:' and 'CPUs info:' sections. That is unavoidable, as the relevant locks are released and re-acquired around each single operation. At least, the inconsistency is less severe than before.

Signed-off-by: Dario Faggioli
Reviewed-by: George Dunlap
---
Cc: George Dunlap
Cc: Anshul Makkar
---
 xen/common/sched_credit2.c | 50 ++++++++++++++++++++++++++------------------
 xen/common/schedule.c      |  1 +
 2 files changed, 30 insertions(+), 21 deletions(-)

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index ef8e0d8..90fe591 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -2581,50 +2581,35 @@ static void
 csched2_dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct csched2_private *prv = CSCHED2_PRIV(ops);
-    struct list_head *runq, *iter;
     struct csched2_vcpu *svc;
     unsigned long flags;
     spinlock_t *lock;
-    int loop;
 #define cpustr keyhandler_scratch
 
     /*
      * We need both locks:
+     *  - we print current, so we need the runqueue lock for this
+     *    cpu (the one of the runqueue this cpu is associated to);
      *  - csched2_dump_vcpu() wants to access domains' weights,
-     *    which are protected by the private scheduler lock;
-     *  - we scan through the runqueue, so we need the proper runqueue
-     *    lock (the one of the runqueue this cpu is associated to).
+     *    which are protected by the private scheduler lock.
      */
     read_lock_irqsave(&prv->lock, flags);
     lock = per_cpu(schedule_data, cpu).schedule_lock;
     spin_lock(lock);
 
-    runq = &RQD(ops, cpu)->runq;
-
     cpumask_scnprintf(cpustr, sizeof(cpustr), per_cpu(cpu_sibling_mask, cpu));
-    printk(" sibling=%s, ", cpustr);
+    printk(" runq=%d, sibling=%s, ", c2r(ops, cpu), cpustr);
     cpumask_scnprintf(cpustr, sizeof(cpustr), per_cpu(cpu_core_mask, cpu));
     printk("core=%s\n", cpustr);
 
-    /* current VCPU */
+    /* current VCPU (nothing to say if that's the idle vcpu) */
     svc = CSCHED2_VCPU(curr_on_cpu(cpu));
-    if ( svc )
+    if ( svc && !is_idle_vcpu(svc->vcpu) )
     {
         printk("\trun: ");
         csched2_dump_vcpu(prv, svc);
     }
 
-    loop = 0;
-    list_for_each( iter, runq )
-    {
-        svc = __runq_elem(iter);
-        if ( svc )
-        {
-            printk("\t%3d: ", ++loop);
-            csched2_dump_vcpu(prv, svc);
-        }
-    }
-
     spin_unlock(lock);
     read_unlock_irqrestore(&prv->lock, flags);
 #undef cpustr
@@ -2706,6 +2691,29 @@ csched2_dump(const struct scheduler *ops)
         }
     }
 
+    printk("Runqueue info:\n");
+    for_each_cpu(i, &prv->active_queues)
+    {
+        struct csched2_runqueue_data *rqd = prv->rqd + i;
+        struct list_head *iter, *runq = &rqd->runq;
+        int loop = 0;
+
+        /* We need the lock to scan the runqueue. */
+        spin_lock(&rqd->lock);
+        printk("runqueue %d:\n", i);
+        list_for_each( iter, runq )
+        {
+            struct csched2_vcpu *svc = __runq_elem(iter);
+
+            if ( svc )
+            {
+                printk("\t%3d: ", loop++);
+                csched2_dump_vcpu(prv, svc);
+            }
+        }
+        spin_unlock(&rqd->lock);
+    }
+
     read_unlock_irqrestore(&prv->lock, flags);
 #undef cpustr
 }
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 5b444c4..e551e06 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -1832,6 +1832,7 @@ void schedule_dump(struct cpupool *c)
         cpus = &cpupool_free_cpus;
     }
 
+    printk("CPUs info:\n");
     for_each_cpu (i, cpus)
     {
         printk("CPU[%02d] ", i);