From patchwork Mon Mar 18 13:11:50 2019
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Date: Mon, 18 Mar 2019 14:11:50 +0100
Message-Id: <20190318131155.29450-2-jgross@suse.com>
In-Reply-To: <20190318131155.29450-1-jgross@suse.com>
References: <20190318131155.29450-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 1/6] xen/sched: call cpu_disable_scheduler() via cpu notifier
Cc: Juergen Gross, Wei Liu, George Dunlap, Andrew Cooper, Dario Faggioli,
 Jan Beulich, Roger Pau Monné

cpu_disable_scheduler() is being called from __cpu_disable() today. There is
no need to call it on the cpu just being disabled, so use the CPU_DEAD case
of the cpu notifier chain.
Signed-off-by: Juergen Gross
Acked-by: George Dunlap
---
 xen/arch/x86/smpboot.c |  3 ---
 xen/common/schedule.c  | 12 +++++-------
 2 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 7d1226d7bc..b7a0a4a419 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -1221,9 +1221,6 @@ void __cpu_disable(void)
     cpumask_clear_cpu(cpu, &cpu_online_map);
     fixup_irqs(&cpu_online_map, 1);
     fixup_eoi();
-
-    if ( cpu_disable_scheduler(cpu) )
-        BUG();
 }
 
 void __cpu_die(unsigned int cpu)
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 60755a631e..665747f247 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -773,8 +773,9 @@ void restore_vcpu_affinity(struct domain *d)
 }
 
 /*
- * This function is used by cpu_hotplug code from stop_machine context
+ * This function is used by cpu_hotplug code via cpu notifier chain
  * and from cpupools to switch schedulers on a cpu.
+ * Caller must get domlist_read_lock.
  */
 int cpu_disable_scheduler(unsigned int cpu)
 {
@@ -789,12 +790,6 @@ int cpu_disable_scheduler(unsigned int cpu)
     if ( c == NULL )
         return ret;
 
-    /*
-     * We'd need the domain RCU lock, but:
-     *  - when we are called from cpupool code, it's acquired there already;
-     *  - when we are called for CPU teardown, we're in stop-machine context,
-     *    so that's not be a problem.
-     */
     for_each_domain_in_cpupool ( d, c )
     {
         for_each_vcpu ( d, v )
@@ -1738,6 +1733,9 @@ static int cpu_schedule_callback(
         rc = cpu_schedule_up(cpu);
         break;
     case CPU_DEAD:
+        rcu_read_lock(&domlist_read_lock);
+        cpu_disable_scheduler(cpu);
+        rcu_read_unlock(&domlist_read_lock);
         SCHED_OP(sched, deinit_pdata, sd->sched_priv, cpu);
         /* Fallthrough */
     case CPU_UP_CANCELED:
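The hunk above hooks into Xen's generic cpu notifier machinery. As a rough
orientation sketch, not part of the patch itself: a subsystem wraps its
callback in a notifier_block and registers it once via register_cpu_notifier();
the registration site shown here (scheduler_init()) is assumed for
illustration and is untouched by this series.

    /* Callback invoked for each cpu hotplug event; hcpu encodes the cpu. */
    static int cpu_schedule_callback(struct notifier_block *nfb,
                                     unsigned long action, void *hcpu)
    {
        unsigned int cpu = (unsigned long)hcpu;

        switch ( action )
        {
        case CPU_DEAD:
            /*
             * Runs in normal hypervisor context (not stop-machine), hence
             * the explicit RCU lock around the domain list traversal.
             */
            rcu_read_lock(&domlist_read_lock);
            cpu_disable_scheduler(cpu);
            rcu_read_unlock(&domlist_read_lock);
            break;
        default:
            break;
        }

        return NOTIFY_DONE;
    }

    static struct notifier_block cpu_schedule_nfb = {
        .notifier_call = cpu_schedule_callback
    };

    /* Registered once during boot, e.g. from scheduler_init(). */
    register_cpu_notifier(&cpu_schedule_nfb);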
From patchwork Mon Mar 18 13:11:51 2019
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Date: Mon, 18 Mar 2019 14:11:51 +0100
Message-Id: <20190318131155.29450-3-jgross@suse.com>
In-Reply-To: <20190318131155.29450-1-jgross@suse.com>
References: <20190318131155.29450-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 2/6] xen: add helper for calling notifier_call_chain() to common/cpu.c
Cc: Juergen Gross, Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
 George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall,
 Jan Beulich

Add a helper cpu_notifier_call_chain() to call notifier_call_chain() for a
cpu with a specified action, returning an errno value. This avoids coding
the same pattern multiple times.

Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
Reviewed-by: George Dunlap
---
 xen/common/cpu.c | 50 +++++++++++++++++++++-----------------------------
 1 file changed, 21 insertions(+), 29 deletions(-)

diff --git a/xen/common/cpu.c b/xen/common/cpu.c
index 836c62f97f..c436c0de7f 100644
--- a/xen/common/cpu.c
+++ b/xen/common/cpu.c
@@ -71,11 +71,18 @@ void __init register_cpu_notifier(struct notifier_block *nb)
     spin_unlock(&cpu_add_remove_lock);
 }
 
+static int cpu_notifier_call_chain(unsigned int cpu, unsigned long action,
+                                   struct notifier_block **nb)
+{
+    void *hcpu = (void *)(long)cpu;
+    int notifier_rc = notifier_call_chain(&cpu_chain, action, hcpu, nb);
+
+    return (notifier_rc == NOTIFY_DONE) ? 0 : notifier_to_errno(notifier_rc);
+}
+
 static void _take_cpu_down(void *unused)
 {
-    void *hcpu = (void *)(long)smp_processor_id();
-    int notifier_rc = notifier_call_chain(&cpu_chain, CPU_DYING, hcpu, NULL);
-    BUG_ON(notifier_rc != NOTIFY_DONE);
+    BUG_ON(cpu_notifier_call_chain(smp_processor_id(), CPU_DYING, NULL));
     __cpu_disable();
 }
 
@@ -87,8 +94,7 @@ static int take_cpu_down(void *arg)
 
 int cpu_down(unsigned int cpu)
 {
-    int err, notifier_rc;
-    void *hcpu = (void *)(long)cpu;
+    int err;
     struct notifier_block *nb = NULL;
 
     if ( !cpu_hotplug_begin() )
@@ -100,12 +106,9 @@ int cpu_down(unsigned int cpu)
         return -EINVAL;
     }
 
-    notifier_rc = notifier_call_chain(&cpu_chain, CPU_DOWN_PREPARE, hcpu, &nb);
-    if ( notifier_rc != NOTIFY_DONE )
-    {
-        err = notifier_to_errno(notifier_rc);
+    err = cpu_notifier_call_chain(cpu, CPU_DOWN_PREPARE, &nb);
+    if ( err )
         goto fail;
-    }
 
     if ( unlikely(system_state < SYS_STATE_active) )
         on_selected_cpus(cpumask_of(cpu), _take_cpu_down, NULL, true);
@@ -115,24 +118,21 @@ int cpu_down(unsigned int cpu)
     __cpu_die(cpu);
     BUG_ON(cpu_online(cpu));
 
-    notifier_rc = notifier_call_chain(&cpu_chain, CPU_DEAD, hcpu, NULL);
-    BUG_ON(notifier_rc != NOTIFY_DONE);
+    BUG_ON(cpu_notifier_call_chain(cpu, CPU_DEAD, NULL));
 
     send_global_virq(VIRQ_PCPU_STATE);
     cpu_hotplug_done();
     return 0;
 
  fail:
-    notifier_rc = notifier_call_chain(&cpu_chain, CPU_DOWN_FAILED, hcpu, &nb);
-    BUG_ON(notifier_rc != NOTIFY_DONE);
+    BUG_ON(cpu_notifier_call_chain(cpu, CPU_DOWN_FAILED, &nb));
     cpu_hotplug_done();
     return err;
 }
 
 int cpu_up(unsigned int cpu)
 {
-    int notifier_rc, err = 0;
-    void *hcpu = (void *)(long)cpu;
+    int err;
     struct notifier_block *nb = NULL;
 
     if ( !cpu_hotplug_begin() )
@@ -144,19 +144,15 @@ int cpu_up(unsigned int cpu)
         return -EINVAL;
     }
 
-    notifier_rc = notifier_call_chain(&cpu_chain, CPU_UP_PREPARE, hcpu, &nb);
-    if ( notifier_rc != NOTIFY_DONE )
-    {
-        err = notifier_to_errno(notifier_rc);
+    err = cpu_notifier_call_chain(cpu, CPU_UP_PREPARE, &nb);
+    if ( err )
         goto fail;
-    }
 
     err = __cpu_up(cpu);
     if ( err < 0 )
         goto fail;
 
-    notifier_rc = notifier_call_chain(&cpu_chain, CPU_ONLINE, hcpu, NULL);
-    BUG_ON(notifier_rc != NOTIFY_DONE);
+    BUG_ON(cpu_notifier_call_chain(cpu, CPU_ONLINE, NULL));
 
     send_global_virq(VIRQ_PCPU_STATE);
 
@@ -164,18 +160,14 @@ int cpu_up(unsigned int cpu)
     return 0;
 
  fail:
-    notifier_rc = notifier_call_chain(&cpu_chain, CPU_UP_CANCELED, hcpu, &nb);
-    BUG_ON(notifier_rc != NOTIFY_DONE);
+    BUG_ON(cpu_notifier_call_chain(cpu, CPU_UP_CANCELED, &nb));
     cpu_hotplug_done();
     return err;
 }
 
 void notify_cpu_starting(unsigned int cpu)
 {
-    void *hcpu = (void *)(long)cpu;
-    int notifier_rc = notifier_call_chain(
-        &cpu_chain, CPU_STARTING, hcpu, NULL);
-    BUG_ON(notifier_rc != NOTIFY_DONE);
+    BUG_ON(cpu_notifier_call_chain(cpu, CPU_STARTING, NULL));
 }
 
 static cpumask_t frozen_cpus;
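The new helper only tidies the caller side; each notifier callback still
reports success or failure through the notifier return codes, which
cpu_notifier_call_chain() folds into a plain errno for cpu_up()/cpu_down().
A minimal sketch of the callback side, assuming the usual
notifier_from_errno() counterpart to notifier_to_errno(); prepare_foo() is a
made-up per-cpu setup helper:

    static int example_cpu_callback(struct notifier_block *nfb,
                                    unsigned long action, void *hcpu)
    {
        unsigned int cpu = (unsigned long)hcpu;
        int rc = 0;

        if ( action == CPU_UP_PREPARE )
            rc = prepare_foo(cpu);    /* hypothetical per-cpu setup */

        /*
         * NOTIFY_DONE on success; otherwise the encoded errno is what
         * cpu_notifier_call_chain() hands back to the hotplug caller.
         */
        return !rc ? NOTIFY_DONE : notifier_from_errno(rc);
    }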
From patchwork Mon Mar 18 13:11:52 2019
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Date: Mon, 18 Mar 2019 14:11:52 +0100
Message-Id: <20190318131155.29450-4-jgross@suse.com>
In-Reply-To: <20190318131155.29450-1-jgross@suse.com>
References: <20190318131155.29450-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 3/6] xen: add new cpu notifier action CPU_RESUME_FAILED
Cc: Juergen Gross, Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
 George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall,
 Jan Beulich

Add a new cpu notifier action CPU_RESUME_FAILED which is called for all cpus
which failed to come up at resume. The calls will be done after all other
cpus are already up.

Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
Reviewed-by: George Dunlap
---
 xen/common/cpu.c      |  5 +++++
 xen/include/xen/cpu.h | 20 +++++++++++---------
 2 files changed, 16 insertions(+), 9 deletions(-)

diff --git a/xen/common/cpu.c b/xen/common/cpu.c
index c436c0de7f..f3cf9463b4 100644
--- a/xen/common/cpu.c
+++ b/xen/common/cpu.c
@@ -214,7 +214,12 @@ void enable_nonboot_cpus(void)
             printk("Error bringing CPU%d up: %d\n", cpu, error);
             BUG_ON(error == -EBUSY);
         }
+        else
+            __cpumask_clear_cpu(cpu, &frozen_cpus);
     }
 
+    for_each_cpu ( cpu, &frozen_cpus )
+        BUG_ON(cpu_notifier_call_chain(cpu, CPU_RESUME_FAILED, NULL));
+
     cpumask_clear(&frozen_cpus);
 }
 
diff --git a/xen/include/xen/cpu.h b/xen/include/xen/cpu.h
index 2fe3ec05d8..2fc0cb1bb5 100644
--- a/xen/include/xen/cpu.h
+++ b/xen/include/xen/cpu.h
@@ -32,23 +32,25 @@ void register_cpu_notifier(struct notifier_block *nb);
  *  (a) A CPU is going down; or (b) CPU_UP_CANCELED
  */
 /* CPU_UP_PREPARE: Preparing to bring CPU online. */
-#define CPU_UP_PREPARE   (0x0001 | NOTIFY_FORWARD)
+#define CPU_UP_PREPARE    (0x0001 | NOTIFY_FORWARD)
 /* CPU_UP_CANCELED: CPU is no longer being brought online. */
-#define CPU_UP_CANCELED  (0x0002 | NOTIFY_REVERSE)
+#define CPU_UP_CANCELED   (0x0002 | NOTIFY_REVERSE)
 /* CPU_STARTING: CPU nearly online. Runs on new CPU, irqs still disabled. */
-#define CPU_STARTING     (0x0003 | NOTIFY_FORWARD)
+#define CPU_STARTING      (0x0003 | NOTIFY_FORWARD)
 /* CPU_ONLINE: CPU is up. */
-#define CPU_ONLINE       (0x0004 | NOTIFY_FORWARD)
+#define CPU_ONLINE        (0x0004 | NOTIFY_FORWARD)
 /* CPU_DOWN_PREPARE: CPU is going down. */
-#define CPU_DOWN_PREPARE (0x0005 | NOTIFY_REVERSE)
+#define CPU_DOWN_PREPARE  (0x0005 | NOTIFY_REVERSE)
 /* CPU_DOWN_FAILED: CPU is no longer going down. */
-#define CPU_DOWN_FAILED  (0x0006 | NOTIFY_FORWARD)
+#define CPU_DOWN_FAILED   (0x0006 | NOTIFY_FORWARD)
 /* CPU_DYING: CPU is nearly dead (in stop_machine context). */
-#define CPU_DYING        (0x0007 | NOTIFY_REVERSE)
+#define CPU_DYING         (0x0007 | NOTIFY_REVERSE)
 /* CPU_DEAD: CPU is dead. */
-#define CPU_DEAD         (0x0008 | NOTIFY_REVERSE)
+#define CPU_DEAD          (0x0008 | NOTIFY_REVERSE)
 /* CPU_REMOVE: CPU was removed. */
-#define CPU_REMOVE       (0x0009 | NOTIFY_REVERSE)
+#define CPU_REMOVE        (0x0009 | NOTIFY_REVERSE)
+/* CPU_RESUME_FAILED: CPU failed to come up in resume, all other CPUs up. */
+#define CPU_RESUME_FAILED (0x000a | NOTIFY_REVERSE)
 
 /* Perform CPU hotplug. May return -EAGAIN. */
 int cpu_down(unsigned int cpu);
From patchwork Mon Mar 18 13:11:53 2019
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Date: Mon, 18 Mar 2019 14:11:53 +0100
Message-Id: <20190318131155.29450-5-jgross@suse.com>
In-Reply-To: <20190318131155.29450-1-jgross@suse.com>
References: <20190318131155.29450-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 4/6] xen: don't free percpu areas during suspend
Cc: Juergen Gross, Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Instead of freeing percpu areas during suspend and allocating them again
when resuming keep them. Only free an area in case a cpu didn't come up
again when resuming.

Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
 xen/arch/x86/percpu.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/percpu.c b/xen/arch/x86/percpu.c
index 8be4ebddf4..5ea14b6ec3 100644
--- a/xen/arch/x86/percpu.c
+++ b/xen/arch/x86/percpu.c
@@ -76,7 +76,8 @@ static int cpu_percpu_callback(
         break;
     case CPU_UP_CANCELED:
     case CPU_DEAD:
-        if ( !park_offline_cpus )
+    case CPU_RESUME_FAILED:
+        if ( !park_offline_cpus && system_state != SYS_STATE_suspend )
             free_percpu_area(cpu);
         break;
     case CPU_REMOVE:
From patchwork Mon Mar 18 13:11:54 2019
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Date: Mon, 18 Mar 2019 14:11:54 +0100
Message-Id: <20190318131155.29450-6-jgross@suse.com>
In-Reply-To: <20190318131155.29450-1-jgross@suse.com>
References: <20190318131155.29450-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 5/6] xen/cpupool: simplify suspend/resume handling
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu,
 Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
 Dario Faggioli, Julien Grall, Jan Beulich

Instead of removing cpus temporarily from cpupools during suspend/resume
only remove cpus finally which didn't come up when resuming.

Signed-off-by: Juergen Gross
Reviewed-by: George Dunlap
Reviewed-by: Dario Faggioli
---
 xen/common/cpupool.c       | 130 ++++++++++++++++++---------------------
 xen/include/xen/sched-if.h |   1 -
 2 files changed, 51 insertions(+), 80 deletions(-)

diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
index e89bb67e71..ed689fd290 100644
--- a/xen/common/cpupool.c
+++ b/xen/common/cpupool.c
@@ -47,12 +47,6 @@ static struct cpupool *alloc_cpupool_struct(void)
         xfree(c);
         c = NULL;
     }
-    else if ( !zalloc_cpumask_var(&c->cpu_suspended) )
-    {
-        free_cpumask_var(c->cpu_valid);
-        xfree(c);
-        c = NULL;
-    }
 
     return c;
 }
@@ -60,10 +54,7 @@ static struct cpupool *alloc_cpupool_struct(void)
 static void free_cpupool_struct(struct cpupool *c)
 {
     if ( c )
-    {
-        free_cpumask_var(c->cpu_suspended);
         free_cpumask_var(c->cpu_valid);
-    }
     xfree(c);
 }
 
@@ -477,10 +468,6 @@ void cpupool_rm_domain(struct domain *d)
 /*
  * Called to add a cpu to a pool. CPUs being hot-plugged are added to pool0,
  * as they must have been in there when unplugged.
- *
- * If, on the other hand, we are adding CPUs because we are resuming (e.g.,
- * after ACPI S3) we put the cpu back in the pool where it was in prior when
- * we suspended.
  */
 static int cpupool_cpu_add(unsigned int cpu)
 {
@@ -490,42 +477,15 @@ static int cpupool_cpu_add(unsigned int cpu)
     cpumask_clear_cpu(cpu, &cpupool_locked_cpus);
     cpumask_set_cpu(cpu, &cpupool_free_cpus);
 
-    if ( system_state == SYS_STATE_suspend || system_state == SYS_STATE_resume )
-    {
-        struct cpupool **c;
-
-        for_each_cpupool(c)
-        {
-            if ( cpumask_test_cpu(cpu, (*c)->cpu_suspended ) )
-            {
-                ret = cpupool_assign_cpu_locked(*c, cpu);
-                if ( ret )
-                    goto out;
-                cpumask_clear_cpu(cpu, (*c)->cpu_suspended);
-                break;
-            }
-        }
+    /*
+     * If we are not resuming, we are hot-plugging cpu, and in which case
+     * we add it to pool0, as it certainly was there when hot-unplagged
+     * (or unplugging would have failed) and that is the default behavior
+     * anyway.
+     */
+    per_cpu(cpupool, cpu) = NULL;
+    ret = cpupool_assign_cpu_locked(cpupool0, cpu);
 
-        /*
-         * Either cpu has been found as suspended in a pool, and added back
-         * there, or it stayed free (if it did not belong to any pool when
-         * suspending), and we don't want to do anything.
-         */
-        ASSERT(cpumask_test_cpu(cpu, &cpupool_free_cpus) ||
-               cpumask_test_cpu(cpu, (*c)->cpu_valid));
-    }
-    else
-    {
-        /*
-         * If we are not resuming, we are hot-plugging cpu, and in which case
-         * we add it to pool0, as it certainly was there when hot-unplagged
-         * (or unplugging would have failed) and that is the default behavior
-         * anyway.
-         */
-        per_cpu(cpupool, cpu) = NULL;
-        ret = cpupool_assign_cpu_locked(cpupool0, cpu);
-    }
- out:
     spin_unlock(&cpupool_lock);
 
     return ret;
@@ -535,42 +495,14 @@ static int cpupool_cpu_add(unsigned int cpu)
  * Called to remove a CPU from a pool. The CPU is locked, to forbid removing
  * it from pool0. In fact, if we want to hot-unplug a CPU, it must belong to
  * pool0, or we fail.
- *
- * However, if we are suspending (e.g., to ACPI S3), we mark the CPU in such
- * a way that it can be put back in its pool when resuming.
  */
 static int cpupool_cpu_remove(unsigned int cpu)
 {
     int ret = -ENODEV;
 
     spin_lock(&cpupool_lock);
-    if ( system_state == SYS_STATE_suspend )
-    {
-        struct cpupool **c;
-
-        for_each_cpupool(c)
-        {
-            if ( cpumask_test_cpu(cpu, (*c)->cpu_valid ) )
-            {
-                cpumask_set_cpu(cpu, (*c)->cpu_suspended);
-                cpumask_clear_cpu(cpu, (*c)->cpu_valid);
-                break;
-            }
-        }
-
-        /*
-         * Either we found cpu in a pool, or it must be free (if it has been
-         * hot-unplagged, then we must have found it in pool0). It is, of
-         * course, fine to suspend or shutdown with CPUs not assigned to a
-         * pool, and (in case of suspend) they will stay free when resuming.
-         */
-        ASSERT(cpumask_test_cpu(cpu, &cpupool_free_cpus) ||
-               cpumask_test_cpu(cpu, (*c)->cpu_suspended));
-        ASSERT(cpumask_test_cpu(cpu, &cpu_online_map) ||
-               cpumask_test_cpu(cpu, cpupool0->cpu_suspended));
-        ret = 0;
-    }
-    else if ( cpumask_test_cpu(cpu, cpupool0->cpu_valid) )
+    if ( cpumask_test_cpu(cpu, cpupool0->cpu_valid) )
     {
         /*
          * If we are not suspending, we are hot-unplugging cpu, and that is
@@ -587,6 +519,41 @@ static int cpupool_cpu_remove(unsigned int cpu)
     return ret;
 }
 
+/*
+ * Called during resume for all cpus which didn't come up again. The cpu must
+ * be removed from the cpupool it is assigned to. In case a cpupool will be
+ * left without cpu we move all domains of that cpupool to cpupool0.
+ */
+static void cpupool_cpu_remove_forced(unsigned int cpu)
+{
+    struct cpupool **c;
+    struct domain *d;
+
+    spin_lock(&cpupool_lock);
+
+    if ( cpumask_test_cpu(cpu, &cpupool_free_cpus) )
+        cpumask_clear_cpu(cpu, &cpupool_free_cpus);
+    else
+    {
+        for_each_cpupool(c)
+        {
+            if ( cpumask_test_cpu(cpu, (*c)->cpu_valid) )
+            {
+                cpumask_clear_cpu(cpu, (*c)->cpu_valid);
+                if ( cpumask_weight((*c)->cpu_valid) == 0 )
+                {
+                    if ( *c == cpupool0 )
+                        panic("No cpu left in cpupool0\n");
+                    for_each_domain_in_cpupool(d, *c)
+                        cpupool_move_domain_locked(d, cpupool0);
+                }
+            }
+        }
+    }
+
+    spin_unlock(&cpupool_lock);
+}
+
 /*
  * do cpupool related sysctl operations
  */
@@ -774,10 +741,15 @@ static int cpu_callback(
     {
     case CPU_DOWN_FAILED:
     case CPU_ONLINE:
-        rc = cpupool_cpu_add(cpu);
+        if ( system_state <= SYS_STATE_active )
+            rc = cpupool_cpu_add(cpu);
         break;
     case CPU_DOWN_PREPARE:
-        rc = cpupool_cpu_remove(cpu);
+        if ( system_state <= SYS_STATE_active )
+            rc = cpupool_cpu_remove(cpu);
+        break;
+    case CPU_RESUME_FAILED:
+        cpupool_cpu_remove_forced(cpu);
         break;
     default:
         break;
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index 9596eae1e2..92bc7a0365 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -214,7 +214,6 @@ struct cpupool
 {
     int              cpupool_id;
     cpumask_var_t    cpu_valid;      /* all cpus assigned to pool */
-    cpumask_var_t    cpu_suspended;  /* cpus in S3 that should be in this pool */
     struct cpupool   *next;
     unsigned int     n_dom;
     struct scheduler *sched;
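The system_state <= SYS_STATE_active checks in the cpu_callback() hunk above
rely on the declaration order of the system state enumeration: the suspend
and resume states are declared after SYS_STATE_active, so the comparison
reads as "not currently in the suspend/resume path". Roughly, as sketched
from xen/include/xen/kernel.h (the exact early boot states are not relevant
to these checks):

    enum system_state {
        SYS_STATE_early_boot,
        SYS_STATE_boot,
        SYS_STATE_smp_boot,
        SYS_STATE_active,     /* normal operation */
        SYS_STATE_suspend,    /* tearing cpus down for S3 */
        SYS_STATE_resume      /* bringing cpus back up after S3 */
    };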
From patchwork Mon Mar 18 13:11:55 2019
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Date: Mon, 18 Mar 2019 14:11:55 +0100
Message-Id: <20190318131155.29450-7-jgross@suse.com>
In-Reply-To: <20190318131155.29450-1-jgross@suse.com>
References: <20190318131155.29450-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 6/6] xen/sched: don't disable scheduler on cpus during suspend
Cc: Juergen Gross, George Dunlap, Dario Faggioli

Today there is special handling in cpu_disable_scheduler() for suspend by
forcing all vcpus to the boot cpu. In fact there is no need for that as
during resume the vcpus are put on the correct cpus again.

So we can just omit the call of cpu_disable_scheduler() when offlining a cpu
due to suspend and on resuming we can omit taking the schedule lock for
selecting the new processor.

In restore_vcpu_affinity() we should be careful when applying affinity as the
cpu might not have come back to life. This in turn enables us to even support
affinity_broken across suspend/resume.

Avoid all other scheduler dealloc - alloc dance when doing suspend and
resume, too. It is enough to react on cpus failing to come up on resume again.

Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
 xen/common/schedule.c | 161 ++++++++++++++++++----------------------------
 1 file changed, 52 insertions(+), 109 deletions(-)

diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 665747f247..8a8598c7ad 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -560,33 +560,6 @@ static void vcpu_move_locked(struct vcpu *v, unsigned int new_cpu)
     v->processor = new_cpu;
 }
 
-/*
- * Move a vcpu from its current processor to a target new processor,
- * without asking the scheduler to do any placement. This is intended
- * for being called from special contexts, where things are quiet
- * enough that no contention is supposed to happen (i.e., during
- * shutdown or software suspend, like ACPI S3).
- */
-static void vcpu_move_nosched(struct vcpu *v, unsigned int new_cpu)
-{
-    unsigned long flags;
-    spinlock_t *lock, *new_lock;
-
-    ASSERT(system_state == SYS_STATE_suspend);
-    ASSERT(!vcpu_runnable(v) && (atomic_read(&v->pause_count) ||
-                                 atomic_read(&v->domain->pause_count)));
-
-    lock = per_cpu(schedule_data, v->processor).schedule_lock;
-    new_lock = per_cpu(schedule_data, new_cpu).schedule_lock;
-
-    sched_spin_lock_double(lock, new_lock, &flags);
-    ASSERT(new_cpu != v->processor);
-    vcpu_move_locked(v, new_cpu);
-    sched_spin_unlock_double(lock, new_lock, flags);
-
-    sched_move_irqs(v);
-}
-
 /*
  * Initiating migration
  *
@@ -735,31 +708,36 @@ void restore_vcpu_affinity(struct domain *d)
 
         ASSERT(!vcpu_runnable(v));
 
-        lock = vcpu_schedule_lock_irq(v);
-
-        if ( v->affinity_broken )
-        {
-            sched_set_affinity(v, v->cpu_hard_affinity_saved, NULL);
-            v->affinity_broken = 0;
-
-        }
-
         /*
-         * During suspend (in cpu_disable_scheduler()), we moved every vCPU
-         * to BSP (which, as of now, is pCPU 0), as a temporary measure to
-         * allow the nonboot processors to have their data structure freed
-         * and go to sleep. But nothing guardantees that the BSP is a valid
-         * pCPU for a particular domain.
+         * Re-assign the initial processor as after resume we have no
+         * guarantee the old processor has come back to life again.
          *
         * Therefore, here, before actually unpausing the domains, we should
         * set v->processor of each of their vCPUs to something that will
         * make sense for the scheduler of the cpupool in which they are in.
         */
        cpumask_and(cpumask_scratch_cpu(cpu), v->cpu_hard_affinity,
-                    cpupool_domain_cpumask(v->domain));
-        v->processor = cpumask_any(cpumask_scratch_cpu(cpu));
+                    cpupool_domain_cpumask(d));
+        if ( cpumask_empty(cpumask_scratch_cpu(cpu)) )
+        {
+            if ( v->affinity_broken )
+            {
+                sched_set_affinity(v, v->cpu_hard_affinity_saved, NULL);
+                v->affinity_broken = 0;
+                cpumask_and(cpumask_scratch_cpu(cpu), v->cpu_hard_affinity,
+                            cpupool_domain_cpumask(d));
+            }
 
-        spin_unlock_irq(lock);
+            if ( cpumask_empty(cpumask_scratch_cpu(cpu)) )
+            {
+                printk(XENLOG_DEBUG "Breaking affinity for %pv\n", v);
+                sched_set_affinity(v, &cpumask_all, NULL);
+                cpumask_and(cpumask_scratch_cpu(cpu), v->cpu_hard_affinity,
+                            cpupool_domain_cpumask(d));
+            }
+        }
+
+        v->processor = cpumask_any(cpumask_scratch_cpu(cpu));
 
         lock = vcpu_schedule_lock_irq(v);
         v->processor = SCHED_OP(vcpu_scheduler(v), pick_cpu, v);
@@ -783,7 +761,6 @@ int cpu_disable_scheduler(unsigned int cpu)
     struct vcpu *v;
     struct cpupool *c;
     cpumask_t online_affinity;
-    unsigned int new_cpu;
    int ret = 0;
 
     c = per_cpu(cpupool, cpu);
@@ -809,14 +786,7 @@ int cpu_disable_scheduler(unsigned int cpu)
                     break;
                 }
 
-                if (system_state == SYS_STATE_suspend)
-                {
-                    cpumask_copy(v->cpu_hard_affinity_saved,
-                                 v->cpu_hard_affinity);
-                    v->affinity_broken = 1;
-                }
-                else
-                    printk(XENLOG_DEBUG "Breaking affinity for %pv\n", v);
+                printk(XENLOG_DEBUG "Breaking affinity for %pv\n", v);
 
                 sched_set_affinity(v, &cpumask_all, NULL);
             }
@@ -828,60 +798,26 @@ int cpu_disable_scheduler(unsigned int cpu)
                 continue;
             }
 
-            /* If it is on this cpu, we must send it away. */
-            if ( unlikely(system_state == SYS_STATE_suspend) )
-            {
-                vcpu_schedule_unlock_irqrestore(lock, flags, v);
-
-                /*
-                 * If we are doing a shutdown/suspend, it is not necessary to
-                 * ask the scheduler to chime in. In fact:
-                 *  * there is no reason for it: the end result we are after
-                 *    is just 'all the vcpus on the boot pcpu, and no vcpu
-                 *    anywhere else', so let's just go for it;
-                 *  * it's wrong, for cpupools with only non-boot pcpus, as
-                 *    the scheduler would always fail to send the vcpus away
-                 *    from the last online (non boot) pcpu!
-                 *
-                 * Therefore, in the shutdown/suspend case, we just pick up
-                 * one (still) online pcpu. Note that, at this stage, all
-                 * domains (including dom0) have been paused already, so we
-                 * do not expect any vcpu activity at all.
-                 */
-                cpumask_andnot(&online_affinity, &cpu_online_map,
-                               cpumask_of(cpu));
-                BUG_ON(cpumask_empty(&online_affinity));
-                /*
-                 * As boot cpu is, usually, pcpu #0, using cpumask_first()
-                 * will make us converge quicker.
-                 */
-                new_cpu = cpumask_first(&online_affinity);
-                vcpu_move_nosched(v, new_cpu);
-            }
-            else
-            {
-                /*
-                 * OTOH, if the system is still live, and we are here because
-                 * we are doing some cpupool manipulations:
-                 *  * we want to call the scheduler, and let it re-evaluation
-                 *    the placement of the vcpu, taking into account the new
-                 *    cpupool configuration;
-                 *  * the scheduler will always fine a suitable solution, or
-                 *    things would have failed before getting in here.
-                 */
-                vcpu_migrate_start(v);
-                vcpu_schedule_unlock_irqrestore(lock, flags, v);
+            /* If it is on this cpu, we must send it away.
+             * We are doing some cpupool manipulations:
+             *  * we want to call the scheduler, and let it re-evaluation
+             *    the placement of the vcpu, taking into account the new
+             *    cpupool configuration;
+             *  * the scheduler will always find a suitable solution, or
+             *    things would have failed before getting in here.
+             */
+            vcpu_migrate_start(v);
+            vcpu_schedule_unlock_irqrestore(lock, flags, v);
 
-                vcpu_migrate_finish(v);
+            vcpu_migrate_finish(v);
 
-                /*
-                 * The only caveat, in this case, is that if a vcpu active in
-                 * the hypervisor isn't migratable. In this case, the caller
-                 * should try again after releasing and reaquiring all locks.
-                 */
-                if ( v->processor == cpu )
-                    ret = -EAGAIN;
-            }
+            /*
+             * The only caveat, in this case, is that if a vcpu active in
+             * the hypervisor isn't migratable. In this case, the caller
+             * should try again after releasing and reaquiring all locks.
+             */
+            if ( v->processor == cpu )
+                ret = -EAGAIN;
         }
     }
 
@@ -1727,20 +1663,27 @@ static int cpu_schedule_callback(
     switch ( action )
     {
     case CPU_STARTING:
-        SCHED_OP(sched, init_pdata, sd->sched_priv, cpu);
+        if ( system_state != SYS_STATE_resume )
+            SCHED_OP(sched, init_pdata, sd->sched_priv, cpu);
         break;
    case CPU_UP_PREPARE:
-        rc = cpu_schedule_up(cpu);
+        if ( system_state != SYS_STATE_resume )
+            rc = cpu_schedule_up(cpu);
        break;
+    case CPU_RESUME_FAILED:
    case CPU_DEAD:
+        if ( system_state == SYS_STATE_suspend )
+            break;
        rcu_read_lock(&domlist_read_lock);
        cpu_disable_scheduler(cpu);
        rcu_read_unlock(&domlist_read_lock);
        SCHED_OP(sched, deinit_pdata, sd->sched_priv, cpu);
-        /* Fallthrough */
-    case CPU_UP_CANCELED:
        cpu_schedule_down(cpu);
        break;
+    case CPU_UP_CANCELED:
+        if ( system_state != SYS_STATE_resume )
+            cpu_schedule_down(cpu);
+        break;
    default:
        break;
    }