From patchwork Thu Feb  9 13:58:24 2017
X-Patchwork-Submitter: Dario Faggioli
X-Patchwork-Id: 9564627
From: Dario Faggioli
To: xen-devel@lists.xenproject.org
Cc: George Dunlap, Anshul Makkar, Meng Xu
Date: Thu, 09 Feb 2017 14:58:24 +0100
Message-ID: <148664870439.595.10870262607126261334.stgit@Solace.fritz.box>
In-Reply-To: <148664844741.595.10506268024432565895.stgit@Solace.fritz.box>
References: <148664844741.595.10506268024432565895.stgit@Solace.fritz.box>
User-Agent: StGit/0.17.1-dirty
Subject: [Xen-devel] [PATCH v2 01/10] xen: sched: harmonize debug dump output among schedulers.
List-Id: Xen developer discussion

Information we currently print for idle pCPUs is rather useless.
Credit2 already stopped showing that; do the same for Credit and RTDS.
Also, define a new CPU status dump hook, which is not defined by those
schedulers which already dump such info in other ways (e.g., Credit2,
which does that while dumping runqueue information). This also means
that, still in Credit2, we can keep the runqueue and pCPU info closer
together.

Signed-off-by: Dario Faggioli
Acked-by: Meng Xu
Reviewed-by: George Dunlap
---
Cc: George Dunlap
Cc: Anshul Makkar
---
This is basically the rebase of "xen: sched: improve debug dump
output." on top of "xen: credit2: improve debug dump output." (i.e.,
commit 3af86727b8204).

Sorry again, George, for the mess... I was sure I hadn't sent the
first one out yet when I sent out what turned out to be the second
version (and, even worse, slightly reworked! :-( ).

I'm keeping Meng's ack, as I did not touch the RTDS part with respect
to the patch he sent it against.
---
 xen/common/sched_credit.c  |  6 +++---
 xen/common/sched_credit2.c | 34 +++++++++++-----------------------
 xen/common/sched_rt.c      |  9 ++++++++-
 xen/common/schedule.c      |  8 ++++----
 4 files changed, 26 insertions(+), 31 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index ad20819..7c0ff47 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -1988,13 +1988,13 @@ csched_dump_pcpu(const struct scheduler *ops, int cpu)
     runq = &spc->runq;
 
     cpumask_scnprintf(cpustr, sizeof(cpustr), per_cpu(cpu_sibling_mask, cpu));
-    printk(" sort=%d, sibling=%s, ", spc->runq_sort_last, cpustr);
+    printk("CPU[%02d] sort=%d, sibling=%s, ", cpu, spc->runq_sort_last, cpustr);
     cpumask_scnprintf(cpustr, sizeof(cpustr), per_cpu(cpu_core_mask, cpu));
     printk("core=%s\n", cpustr);
 
-    /* current VCPU */
+    /* current VCPU (nothing to say if that's the idle vcpu). */
     svc = CSCHED_VCPU(curr_on_cpu(cpu));
-    if ( svc )
+    if ( svc && !is_idle_vcpu(svc->vcpu) )
     {
         printk("\trun: ");
         csched_dump_vcpu(svc);
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 84ee015..741d372 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -2627,28 +2627,15 @@ csched2_dump_vcpu(struct csched2_private *prv, struct csched2_vcpu *svc)
     printk("\n");
 }
 
-static void
-csched2_dump_pcpu(const struct scheduler *ops, int cpu)
+static inline void
+dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct csched2_private *prv = CSCHED2_PRIV(ops);
     struct csched2_vcpu *svc;
-    unsigned long flags;
-    spinlock_t *lock;
 
 #define cpustr keyhandler_scratch
 
-    /*
-     * We need both locks:
-     * - we print current, so we need the runqueue lock for this
-     *   cpu (the one of the runqueue this cpu is associated to);
-     * - csched2_dump_vcpu() wants to access domains' weights,
-     *   which are protected by the private scheduler lock.
-     */
-    read_lock_irqsave(&prv->lock, flags);
-    lock = per_cpu(schedule_data, cpu).schedule_lock;
-    spin_lock(lock);
-
     cpumask_scnprintf(cpustr, sizeof(cpustr), per_cpu(cpu_sibling_mask, cpu));
-    printk(" runq=%d, sibling=%s, ", c2r(ops, cpu), cpustr);
+    printk("CPU[%02d] runq=%d, sibling=%s, ", cpu, c2r(ops, cpu), cpustr);
     cpumask_scnprintf(cpustr, sizeof(cpustr), per_cpu(cpu_core_mask, cpu));
     printk("core=%s\n", cpustr);
@@ -2659,9 +2646,6 @@ csched2_dump_pcpu(const struct scheduler *ops, int cpu)
         printk("\trun: ");
         csched2_dump_vcpu(prv, svc);
     }
-
-    spin_unlock(lock);
-    read_unlock_irqrestore(&prv->lock, flags);
 
 #undef cpustr
 }
@@ -2671,7 +2655,7 @@ csched2_dump(const struct scheduler *ops)
     struct list_head *iter_sdom;
     struct csched2_private *prv = CSCHED2_PRIV(ops);
     unsigned long flags;
-    int i, loop;
+    unsigned int i, j, loop;
 #define cpustr keyhandler_scratch
 
     /*
@@ -2741,7 +2725,6 @@ csched2_dump(const struct scheduler *ops)
         }
     }
 
-    printk("Runqueue info:\n");
     for_each_cpu(i, &prv->active_queues)
     {
         struct csched2_runqueue_data *rqd = prv->rqd + i;
@@ -2750,7 +2733,13 @@ csched2_dump(const struct scheduler *ops)
         /* We need the lock to scan the runqueue. */
         spin_lock(&rqd->lock);
 
-        printk("runqueue %d:\n", i);
+
+        printk("Runqueue %d:\n", i);
+
+        for_each_cpu(j, &rqd->active)
+            dump_pcpu(ops, j);
+
+        printk("RUNQ:\n");
         list_for_each( iter, runq )
         {
             struct csched2_vcpu *svc = __runq_elem(iter);
@@ -3108,7 +3097,6 @@ static const struct scheduler sched_credit2_def = {
     .do_schedule    = csched2_schedule,
     .context_saved  = csched2_context_saved,
 
-    .dump_cpu_state = csched2_dump_pcpu,
     .dump_settings  = csched2_dump,
     .init           = csched2_init,
     .deinit         = csched2_deinit,
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 24b4b22..f2d979c 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -320,10 +320,17 @@
 static void
 rt_dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct rt_private *prv = rt_priv(ops);
+    struct rt_vcpu *svc;
     unsigned long flags;
 
     spin_lock_irqsave(&prv->lock, flags);
-    rt_dump_vcpu(ops, rt_vcpu(curr_on_cpu(cpu)));
+    printk("CPU[%02d]\n", cpu);
+    /* current VCPU (nothing to say if that's the idle vcpu). */
+    svc = rt_vcpu(curr_on_cpu(cpu));
+    if ( svc && !is_idle_vcpu(svc->vcpu) )
+    {
+        rt_dump_vcpu(ops, svc);
+    }
     spin_unlock_irqrestore(&prv->lock, flags);
 }
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index ed77990..e4320f3 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -1844,11 +1844,11 @@ void schedule_dump(struct cpupool *c)
         cpus = &cpupool_free_cpus;
     }
 
-    printk("CPUs info:\n");
-    for_each_cpu (i, cpus)
+    if ( sched->dump_cpu_state != NULL )
     {
-        printk("CPU[%02d] ", i);
-        SCHED_OP(sched, dump_cpu_state, i);
+        printk("CPUs info:\n");
+        for_each_cpu (i, cpus)
+            SCHED_OP(sched, dump_cpu_state, i);
     }
 }