From patchwork Fri Mar 22 09:04:31 2019
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Dario Faggioli
Date: Fri, 22 Mar 2019 10:04:31 +0100
Message-Id: <20190322090431.28112-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH] xen/sched: fix credit2 smt idle handling

Credit2's smt_idle_mask_set() and smt_idle_mask_clear() are used to
identify idle cores to which vcpus can be moved. A core is considered
idle when all of its siblings are known to be running the idle vcpu.

Unfortunately, the information that a vcpu is running on a cpu is kept
per runqueue. So if not all siblings of a core are in the same runqueue,
that core will never be regarded as idle, because a sibling outside the
runqueue is never known to run the idle vcpu.

This problem can be solved by and-ing the core's sibling cpumask with
the runqueue's active mask before doing the idle test.

To avoid having to allocate another cpumask, the interfaces of
smt_idle_mask_set() and smt_idle_mask_clear() are changed to take the
runqueue data pointer instead of a mask, as those functions are always
called with the same masks as parameters.
Signed-off-by: Juergen Gross
---
 xen/common/sched_credit2.c | 35 ++++++++++++++++-------------------
 1 file changed, 16 insertions(+), 19 deletions(-)

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 543dc3664d..ab50e7ad23 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -638,7 +638,8 @@ static inline bool has_cap(const struct csched2_vcpu *svc)
 
 /*
  * If all the siblings of cpu (including cpu itself) are both idle and
- * untickled, set all their bits in mask.
+ * untickled, set all their bits in mask. Note that only siblings handled
+ * by the rqd can be taken into account.
  *
  * NB that rqd->smt_idle is different than rqd->idle. rqd->idle
  * records pcpus that at are merely idle (i.e., at the moment do not
@@ -653,25 +654,23 @@ static inline bool has_cap(const struct csched2_vcpu *svc)
  * changes.
  */
 static inline
-void smt_idle_mask_set(unsigned int cpu, const cpumask_t *idlers,
-                       cpumask_t *mask)
+void smt_idle_mask_set(unsigned int cpu, struct csched2_runqueue_data *rqd)
 {
-    const cpumask_t *cpu_siblings = per_cpu(cpu_sibling_mask, cpu);
-
-    if ( cpumask_subset(cpu_siblings, idlers) )
-        cpumask_or(mask, mask, cpu_siblings);
+    cpumask_and(cpumask_scratch, per_cpu(cpu_sibling_mask, cpu), &rqd->active);
+    if ( cpumask_subset(cpumask_scratch, &rqd->idle) &&
+         !cpumask_intersects(cpumask_scratch, &rqd->tickled) )
+        cpumask_or(&rqd->smt_idle, &rqd->smt_idle, cpumask_scratch);
 }
 
 /*
  * Clear the bits of all the siblings of cpu from mask (if necessary).
  */
 static inline
-void smt_idle_mask_clear(unsigned int cpu, cpumask_t *mask)
+void smt_idle_mask_clear(unsigned int cpu, struct csched2_runqueue_data *rqd)
 {
-    const cpumask_t *cpu_siblings = per_cpu(cpu_sibling_mask, cpu);
-
-    if ( cpumask_subset(cpu_siblings, mask) )
-        cpumask_andnot(mask, mask, per_cpu(cpu_sibling_mask, cpu));
+    cpumask_and(cpumask_scratch, per_cpu(cpu_sibling_mask, cpu), &rqd->active);
+    if ( cpumask_subset(cpumask_scratch, &rqd->smt_idle) )
+        cpumask_andnot(&rqd->smt_idle, &rqd->smt_idle, cpumask_scratch);
 }
 
 /*
@@ -1323,7 +1322,7 @@ static inline
 void tickle_cpu(unsigned int cpu, struct csched2_runqueue_data *rqd)
 {
     __cpumask_set_cpu(cpu, &rqd->tickled);
-    smt_idle_mask_clear(cpu, &rqd->smt_idle);
+    smt_idle_mask_clear(cpu, rqd);
     cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
 }
 
@@ -3468,8 +3467,7 @@ csched2_schedule(
     if ( tickled )
     {
         __cpumask_clear_cpu(cpu, &rqd->tickled);
-        cpumask_andnot(cpumask_scratch, &rqd->idle, &rqd->tickled);
-        smt_idle_mask_set(cpu, cpumask_scratch, &rqd->smt_idle);
+        smt_idle_mask_set(cpu, rqd);
     }
 
     if ( unlikely(tb_init_done) )
@@ -3553,7 +3551,7 @@ csched2_schedule(
         if ( cpumask_test_cpu(cpu, &rqd->idle) )
         {
             __cpumask_clear_cpu(cpu, &rqd->idle);
-            smt_idle_mask_clear(cpu, &rqd->smt_idle);
+            smt_idle_mask_clear(cpu, rqd);
         }
 
         /*
@@ -3599,14 +3597,13 @@ csched2_schedule(
         if ( cpumask_test_cpu(cpu, &rqd->idle) )
         {
             __cpumask_clear_cpu(cpu, &rqd->idle);
-            smt_idle_mask_clear(cpu, &rqd->smt_idle);
+            smt_idle_mask_clear(cpu, rqd);
         }
     }
     else if ( !cpumask_test_cpu(cpu, &rqd->idle) )
     {
         __cpumask_set_cpu(cpu, &rqd->idle);
-        cpumask_andnot(cpumask_scratch, &rqd->idle, &rqd->tickled);
-        smt_idle_mask_set(cpu, cpumask_scratch, &rqd->smt_idle);
+        smt_idle_mask_set(cpu, rqd);
     }
     /* Make sure avgload gets updated periodically even
      * if there's no activity */