From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu,
    Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
    Dario Faggioli, Julien Grall, Jan Beulich
Date: Thu, 28 Mar 2019 13:06:57 +0100
Message-Id: <20190328120658.11083-6-jgross@suse.com>
In-Reply-To: <20190328120658.11083-1-jgross@suse.com>
References: <20190328120658.11083-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v2 5/6] xen/cpupool: simplify suspend/resume handling

Instead of temporarily removing cpus from their cpupools during
suspend/resume, only remove those cpus permanently which didn't come up
again when resuming.
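For illustration only, here is a condensed sketch of the resulting
notifier handling (taken from the cpu_callback hunk below, not an
additional change to apply): cpupool membership is left untouched while
suspending/resuming, and only a cpu failing to come up again is dropped
from its pool:

    /* Sketch: cpupool reaction to cpu notifier events after this patch. */
    switch ( action )
    {
    case CPU_DOWN_FAILED:
    case CPU_ONLINE:
        /* Physical hotplug only; nothing to do during suspend/resume. */
        if ( system_state <= SYS_STATE_active )
            rc = cpupool_cpu_add(cpu);
        break;
    case CPU_DOWN_PREPARE:
        /* Suspend keeps the cpu assigned to its cpupool. */
        if ( system_state <= SYS_STATE_active )
            rc = cpupool_cpu_remove(cpu);
        break;
    case CPU_RESUME_FAILED:
        /* The cpu didn't come up again: remove it from its pool for good. */
        cpupool_cpu_remove_forced(cpu);
        break;
    }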
Signed-off-by: Juergen Gross
Reviewed-by: George Dunlap
Reviewed-by: Dario Faggioli
---
V2:
- add comment (George Dunlap)
---
 xen/common/cpupool.c       | 131 ++++++++++++++++++---------------------------
 xen/include/xen/sched-if.h |   1 -
 2 files changed, 52 insertions(+), 80 deletions(-)

diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
index e89bb67e71..31ac323e40 100644
--- a/xen/common/cpupool.c
+++ b/xen/common/cpupool.c
@@ -47,12 +47,6 @@ static struct cpupool *alloc_cpupool_struct(void)
         xfree(c);
         c = NULL;
     }
-    else if ( !zalloc_cpumask_var(&c->cpu_suspended) )
-    {
-        free_cpumask_var(c->cpu_valid);
-        xfree(c);
-        c = NULL;
-    }
 
     return c;
 }
@@ -60,10 +54,7 @@ static struct cpupool *alloc_cpupool_struct(void)
 static void free_cpupool_struct(struct cpupool *c)
 {
     if ( c )
-    {
-        free_cpumask_var(c->cpu_suspended);
         free_cpumask_var(c->cpu_valid);
-    }
     xfree(c);
 }
 
@@ -477,10 +468,6 @@ void cpupool_rm_domain(struct domain *d)
 /*
  * Called to add a cpu to a pool. CPUs being hot-plugged are added to pool0,
  * as they must have been in there when unplugged.
- *
- * If, on the other hand, we are adding CPUs because we are resuming (e.g.,
- * after ACPI S3) we put the cpu back in the pool where it was in prior when
- * we suspended.
  */
 static int cpupool_cpu_add(unsigned int cpu)
 {
@@ -490,42 +477,15 @@ static int cpupool_cpu_add(unsigned int cpu)
     cpumask_clear_cpu(cpu, &cpupool_locked_cpus);
     cpumask_set_cpu(cpu, &cpupool_free_cpus);
 
-    if ( system_state == SYS_STATE_suspend || system_state == SYS_STATE_resume )
-    {
-        struct cpupool **c;
-
-        for_each_cpupool(c)
-        {
-            if ( cpumask_test_cpu(cpu, (*c)->cpu_suspended ) )
-            {
-                ret = cpupool_assign_cpu_locked(*c, cpu);
-                if ( ret )
-                    goto out;
-                cpumask_clear_cpu(cpu, (*c)->cpu_suspended);
-                break;
-            }
-        }
+    /*
+     * If we are not resuming, we are hot-plugging cpu, in which case
+     * we add it to pool0, as it certainly was there when hot-unplugged
+     * (or unplugging would have failed) and that is the default behavior
+     * anyway.
+     */
+    per_cpu(cpupool, cpu) = NULL;
+    ret = cpupool_assign_cpu_locked(cpupool0, cpu);
 
-        /*
-         * Either cpu has been found as suspended in a pool, and added back
-         * there, or it stayed free (if it did not belong to any pool when
-         * suspending), and we don't want to do anything.
-         */
-        ASSERT(cpumask_test_cpu(cpu, &cpupool_free_cpus) ||
-               cpumask_test_cpu(cpu, (*c)->cpu_valid));
-    }
-    else
-    {
-        /*
-         * If we are not resuming, we are hot-plugging cpu, and in which case
-         * we add it to pool0, as it certainly was there when hot-unplagged
-         * (or unplugging would have failed) and that is the default behavior
-         * anyway.
-         */
-        per_cpu(cpupool, cpu) = NULL;
-        ret = cpupool_assign_cpu_locked(cpupool0, cpu);
-    }
- out:
     spin_unlock(&cpupool_lock);
 
     return ret;
@@ -535,42 +495,14 @@ static int cpupool_cpu_add(unsigned int cpu)
  * Called to remove a CPU from a pool. The CPU is locked, to forbid removing
  * it from pool0. In fact, if we want to hot-unplug a CPU, it must belong to
  * pool0, or we fail.
- *
- * However, if we are suspending (e.g., to ACPI S3), we mark the CPU in such
- * a way that it can be put back in its pool when resuming.
  */
 static int cpupool_cpu_remove(unsigned int cpu)
 {
     int ret = -ENODEV;
 
     spin_lock(&cpupool_lock);
 
-    if ( system_state == SYS_STATE_suspend )
-    {
-        struct cpupool **c;
-
-        for_each_cpupool(c)
-        {
-            if ( cpumask_test_cpu(cpu, (*c)->cpu_valid ) )
-            {
-                cpumask_set_cpu(cpu, (*c)->cpu_suspended);
-                cpumask_clear_cpu(cpu, (*c)->cpu_valid);
-                break;
-            }
-        }
-        /*
-         * Either we found cpu in a pool, or it must be free (if it has been
-         * hot-unplagged, then we must have found it in pool0). It is, of
-         * course, fine to suspend or shutdown with CPUs not assigned to a
-         * pool, and (in case of suspend) they will stay free when resuming.
-         */
-        ASSERT(cpumask_test_cpu(cpu, &cpupool_free_cpus) ||
-               cpumask_test_cpu(cpu, (*c)->cpu_suspended));
-        ASSERT(cpumask_test_cpu(cpu, &cpu_online_map) ||
-               cpumask_test_cpu(cpu, cpupool0->cpu_suspended));
-        ret = 0;
-    }
-    else if ( cpumask_test_cpu(cpu, cpupool0->cpu_valid) )
+    if ( cpumask_test_cpu(cpu, cpupool0->cpu_valid) )
     {
         /*
          * If we are not suspending, we are hot-unplugging cpu, and that is
@@ -587,6 +519,41 @@ static int cpupool_cpu_remove(unsigned int cpu)
     return ret;
 }
 
+/*
+ * Called during resume for all cpus which didn't come up again. The cpu must
+ * be removed from the cpupool it is assigned to. In case a cpupool would be
+ * left without any cpu we move all domains of that cpupool to cpupool0.
+ */
+static void cpupool_cpu_remove_forced(unsigned int cpu)
+{
+    struct cpupool **c;
+    struct domain *d;
+
+    spin_lock(&cpupool_lock);
+
+    if ( cpumask_test_cpu(cpu, &cpupool_free_cpus) )
+        cpumask_clear_cpu(cpu, &cpupool_free_cpus);
+    else
+    {
+        for_each_cpupool(c)
+        {
+            if ( cpumask_test_cpu(cpu, (*c)->cpu_valid) )
+            {
+                cpumask_clear_cpu(cpu, (*c)->cpu_valid);
+                if ( cpumask_weight((*c)->cpu_valid) == 0 )
+                {
+                    if ( *c == cpupool0 )
+                        panic("No cpu left in cpupool0\n");
+                    for_each_domain_in_cpupool(d, *c)
+                        cpupool_move_domain_locked(d, cpupool0);
+                }
+            }
+        }
+    }
+
+    spin_unlock(&cpupool_lock);
+}
+
 /*
  * do cpupool related sysctl operations
  */
@@ -774,10 +741,16 @@ static int cpu_callback(
     {
     case CPU_DOWN_FAILED:
    case CPU_ONLINE:
-        rc = cpupool_cpu_add(cpu);
+        if ( system_state <= SYS_STATE_active )
+            rc = cpupool_cpu_add(cpu);
         break;
     case CPU_DOWN_PREPARE:
-        rc = cpupool_cpu_remove(cpu);
+        /* Suspend/Resume don't change assignments of cpus to cpupools. */
+        if ( system_state <= SYS_STATE_active )
+            rc = cpupool_cpu_remove(cpu);
+        break;
+    case CPU_RESUME_FAILED:
+        cpupool_cpu_remove_forced(cpu);
         break;
     default:
         break;
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index 9596eae1e2..92bc7a0365 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -214,7 +214,6 @@ struct cpupool
 {
     int              cpupool_id;
     cpumask_var_t    cpu_valid;      /* all cpus assigned to pool */
-    cpumask_var_t    cpu_suspended;  /* cpus in S3 that should be in this pool */
     struct cpupool   *next;
     unsigned int     n_dom;
     struct scheduler *sched;