From patchwork Fri Mar 29 15:08:52 2019
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 10877279
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Dario Faggioli
Date: Fri, 29 Mar 2019 16:08:52 +0100
Message-Id: <20190329150934.17694-8-jgross@suse.com>
In-Reply-To: <20190329150934.17694-1-jgross@suse.com>
References: <20190329150934.17694-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH RFC 07/49] xen/sched: fix credit2 smt idle handling

Credit2's smt_idle_mask_set() and smt_idle_mask_clear() are used to
identify idle cores to which vcpus can be moved. A core is considered
idle when all of its siblings are known to be running the idle vcpu.

Unfortunately, the information about which vcpu is running on a cpu is
tracked per runqueue. So if not all siblings of a core are in the same
runqueue, that core will never be regarded as idle, because a sibling
outside the runqueue is never known to be running the idle vcpu.

Fix this by using a credit2-specific cpumask of siblings, in which only
those cpus are marked that are in the same runqueue as the cpu in
question.
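To illustrate the effect, here is a minimal standalone sketch of the
subset test that smt_idle_mask_set() performs. It is not part of the
patch: the toy cpumask_t, the CPU() macro and the smt_idle_set() helper
below are made-up stand-ins for Xen's cpumask API, used only to show
why restricting the sibling mask to the runqueue matters.

    /*
     * Toy model of the cpumask subset test in smt_idle_mask_set().
     * cpumask_t, CPU() and smt_idle_set() are stand-ins, not Xen APIs.
     */
    #include <stdio.h>

    typedef unsigned long cpumask_t;        /* toy mask, cpus 0..63 */
    #define CPU(n) (1UL << (n))

    /* Mark a core as fully idle only if all given siblings are idlers. */
    static void smt_idle_set(cpumask_t siblings, cpumask_t idlers,
                             cpumask_t *smt_idle)
    {
        if ( (siblings & idlers) == siblings )  /* cpumask_subset() */
            *smt_idle |= siblings;              /* cpumask_or() */
    }

    int main(void)
    {
        cpumask_t smt_idle;

        /* cpu 0 is idle; its sibling cpu 1 is in another runqueue. */

        /* Old behaviour: full topology sibling mask {0,1}. */
        smt_idle = 0;
        smt_idle_set(CPU(0) | CPU(1), CPU(0), &smt_idle);
        printf("topology mask:  smt_idle = %#lx\n", smt_idle);  /* 0 */

        /* New behaviour: sibling mask restricted to the runqueue, {0}. */
        smt_idle = 0;
        smt_idle_set(CPU(0), CPU(0), &smt_idle);
        printf("runqueue mask:  smt_idle = %#lx\n", smt_idle);  /* 0x1 */

        return 0;
    }

With the full topology sibling mask {0,1} the subset test can never
succeed while cpu 1 lives in another runqueue, so the core is never
marked in smt_idle; with the runqueue-local mask {0} it succeeds as
soon as cpu 0 idles.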
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- use credit2 per-cpu specific sibling mask
---
 xen/common/sched_credit2.c | 25 +++++++++++++++++++++----
 1 file changed, 21 insertions(+), 4 deletions(-)

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 543dc3664d..6958b265fc 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -504,6 +504,7 @@ struct csched2_private {
  * Physical CPU
  */
 struct csched2_pcpu {
+    cpumask_t sibling_mask;            /* Siblings in the same runqueue */
     int runq_id;
 };
 
@@ -656,7 +657,7 @@ static inline
 void smt_idle_mask_set(unsigned int cpu, const cpumask_t *idlers,
                        cpumask_t *mask)
 {
-    const cpumask_t *cpu_siblings = per_cpu(cpu_sibling_mask, cpu);
+    const cpumask_t *cpu_siblings = &csched2_pcpu(cpu)->sibling_mask;
 
     if ( cpumask_subset(cpu_siblings, idlers) )
         cpumask_or(mask, mask, cpu_siblings);
@@ -668,10 +669,10 @@ void smt_idle_mask_set(unsigned int cpu, const cpumask_t *idlers,
 static inline
 void smt_idle_mask_clear(unsigned int cpu, cpumask_t *mask)
 {
-    const cpumask_t *cpu_siblings = per_cpu(cpu_sibling_mask, cpu);
+    const cpumask_t *cpu_siblings = &csched2_pcpu(cpu)->sibling_mask;
 
     if ( cpumask_subset(cpu_siblings, mask) )
-        cpumask_andnot(mask, mask, per_cpu(cpu_sibling_mask, cpu));
+        cpumask_andnot(mask, mask, cpu_siblings);
 }
 
 /*
@@ -3793,6 +3794,7 @@ init_pdata(struct csched2_private *prv, struct csched2_pcpu *spc,
            unsigned int cpu)
 {
     struct csched2_runqueue_data *rqd;
+    unsigned int rcpu;
 
     ASSERT(rw_is_write_locked(&prv->lock));
     ASSERT(!cpumask_test_cpu(cpu, &prv->initialized));
@@ -3810,12 +3812,23 @@ init_pdata(struct csched2_private *prv, struct csched2_pcpu *spc,
         printk(XENLOG_INFO " First cpu on runqueue, activating\n");
         activate_runqueue(prv, spc->runq_id);
     }
-    
+
     __cpumask_set_cpu(cpu, &rqd->idle);
     __cpumask_set_cpu(cpu, &rqd->active);
     __cpumask_set_cpu(cpu, &prv->initialized);
     __cpumask_set_cpu(cpu, &rqd->smt_idle);
 
+    /* On the boot cpu we are called before cpu_sibling_mask has been set up. */
+    if ( cpu == 0 && system_state < SYS_STATE_active )
+        __cpumask_set_cpu(cpu, &csched2_pcpu(cpu)->sibling_mask);
+    else
+        for_each_cpu ( rcpu, per_cpu(cpu_sibling_mask, cpu) )
+            if ( cpumask_test_cpu(rcpu, &rqd->active) )
+            {
+                __cpumask_set_cpu(cpu, &csched2_pcpu(rcpu)->sibling_mask);
+                __cpumask_set_cpu(rcpu, &csched2_pcpu(cpu)->sibling_mask);
+            }
+
     if ( cpumask_weight(&rqd->active) == 1 )
         rqd->pick_bias = cpu;
 
@@ -3897,6 +3910,7 @@ csched2_deinit_pdata(const struct scheduler *ops, void *pcpu, int cpu)
     struct csched2_private *prv = csched2_priv(ops);
     struct csched2_runqueue_data *rqd;
     struct csched2_pcpu *spc = pcpu;
+    unsigned int rcpu;
 
     write_lock_irqsave(&prv->lock, flags);
 
@@ -3923,6 +3937,9 @@ csched2_deinit_pdata(const struct scheduler *ops, void *pcpu, int cpu)
     printk(XENLOG_INFO "Removing cpu %d from runqueue %d\n",
            cpu, spc->runq_id);
 
+    for_each_cpu ( rcpu, &rqd->active )
+        __cpumask_clear_cpu(cpu, &csched2_pcpu(rcpu)->sibling_mask);
+
     __cpumask_clear_cpu(cpu, &rqd->idle);
     __cpumask_clear_cpu(cpu, &rqd->smt_idle);
     __cpumask_clear_cpu(cpu, &rqd->active);