From patchwork Fri Aug 9 14:58:32 2019
X-Patchwork-Id: 11086711
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Date: Fri, 9 Aug 2019 16:58:32 +0200
Message-Id: <20190809145833.1020-48-jgross@suse.com>
In-Reply-To: <20190809145833.1020-1-jgross@suse.com>
References: <20190809145833.1020-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v2 47/48] xen/sched: disable scheduling when entering ACPI deep sleep states
Cc: Juergen Gross, Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
    George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall,
    Jan Beulich, Dario Faggioli, Roger Pau Monné

When entering deep sleep states all domains are paused, so all cpus are
running only idle vcpus. This allows scheduling to be stopped completely,
avoiding synchronization problems with core scheduling when individual
cpus are offlined.

The scheduler is disabled by replacing the scheduling softirq handlers
with a dummy routine that only lets tasklets run.
Signed-off-by: Juergen Gross
---
V2: new patch
---
 xen/arch/x86/acpi/power.c |  4 ++++
 xen/common/schedule.c     | 31 +++++++++++++++++++++++++++++--
 xen/include/xen/sched.h   |  2 ++
 3 files changed, 35 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
index aecc754fdb..431db8dca8 100644
--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ -122,12 +122,16 @@ static void freeze_domains(void)
     for_each_domain ( d )
         domain_pause(d);
     rcu_read_unlock(&domlist_read_lock);
+
+    scheduler_disable();
 }
 
 static void thaw_domains(void)
 {
     struct domain *d;
 
+    scheduler_enable();
+
     rcu_read_lock(&domlist_read_lock);
     for_each_domain ( d )
     {
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index e0521de8ce..181adb00b2 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -84,6 +84,8 @@ extern const struct scheduler *__start_schedulers_array[], *__end_schedulers_arr
 
 static struct scheduler __read_mostly ops;
 
+static bool scheduler_active;
+
 static struct sched_resource *
 sched_idle_res_pick(const struct scheduler *ops, struct sched_unit *unit)
 {
@@ -2230,6 +2232,13 @@ static struct sched_unit *sched_wait_rendezvous_in(struct sched_unit *prev,
         cpu_relax();
 
         *lock = pcpu_schedule_lock_irq(cpu);
+
+        if ( unlikely(!scheduler_active) )
+        {
+            ASSERT(is_idle_unit(prev));
+            atomic_set(&prev->next_task->rendezvous_out_cnt, 0);
+            prev->rendezvous_in_cnt = 0;
+        }
     }
 
     return prev->next_task;
@@ -2578,14 +2587,32 @@ const cpumask_t *sched_get_opt_cpumask(enum sched_gran opt, unsigned int cpu)
     return mask;
 }
 
+static void schedule_dummy(void)
+{
+    sched_tasklet_check_cpu(smp_processor_id());
+}
+
+void scheduler_disable(void)
+{
+    scheduler_active = false;
+    open_softirq(SCHEDULE_SOFTIRQ, schedule_dummy);
+    open_softirq(SCHED_SLAVE_SOFTIRQ, schedule_dummy);
+}
+
+void scheduler_enable(void)
+{
+    open_softirq(SCHEDULE_SOFTIRQ, schedule);
+    open_softirq(SCHED_SLAVE_SOFTIRQ, sched_slave);
+    scheduler_active = true;
+}
+
 /* Initialise the data structures. */
 void __init scheduler_init(void)
 {
     struct domain *idle_domain;
     int i;
 
-    open_softirq(SCHEDULE_SOFTIRQ, schedule);
-    open_softirq(SCHED_SLAVE_SOFTIRQ, sched_slave);
+    scheduler_enable();
 
     for ( i = 0; i < NUM_SCHEDULERS; i++)
     {
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 346e564e05..6d0ea1f60b 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -926,6 +926,8 @@ void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate);
 uint64_t get_cpu_idle_time(unsigned int cpu);
 bool sched_has_urgent_vcpu(void);
 void sched_guest_idle(void (*idle) (void), unsigned int cpu);
+void scheduler_enable(void);
+void scheduler_disable(void);
 
 /*
  * Used by idle loop to decide whether there is work to do:
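
As a side note for readers of this thread, below is a minimal standalone C
sketch of the handler-swap mechanism the patch uses, assuming a plain
function-pointer table in place of Xen's softirq machinery.
fake_open_softirq(), run_tasklets(), real_schedule(), FAKE_SCHEDULE_SOFTIRQ
and the main() driver are illustrative stand-ins, not Xen APIs; only the
scheduler_active flag, the dummy handler and the enable/disable pairing
mirror what the patch does.

/*
 * Standalone sketch (not Xen code): "disabling" the scheduler means
 * pointing the scheduling softirq at a dummy routine that only services
 * tasklets; enabling it again restores the real handler.
 */
#include <stdbool.h>
#include <stdio.h>

typedef void (*softirq_handler_t)(void);

enum { FAKE_SCHEDULE_SOFTIRQ, FAKE_NR_SOFTIRQS };

static softirq_handler_t softirq_table[FAKE_NR_SOFTIRQS];

/*
 * In the patch, sched_wait_rendezvous_in() checks this flag to break out
 * of its rendezvous loop; here it is only set to show the pairing.
 */
static bool scheduler_active;

/* Analogous in spirit to open_softirq(): (re)register a softirq handler. */
static void fake_open_softirq(int nr, softirq_handler_t fn)
{
    softirq_table[nr] = fn;
}

static void run_tasklets(void)  { puts("tasklets only"); }
static void real_schedule(void) { puts("full scheduling"); }

/* While suspended: service tasklets, never pick a new vcpu to run. */
static void schedule_dummy(void)
{
    run_tasklets();
}

static void fake_scheduler_disable(void)
{
    scheduler_active = false;
    fake_open_softirq(FAKE_SCHEDULE_SOFTIRQ, schedule_dummy);
}

static void fake_scheduler_enable(void)
{
    fake_open_softirq(FAKE_SCHEDULE_SOFTIRQ, real_schedule);
    scheduler_active = true;
}

int main(void)
{
    fake_scheduler_enable();                  /* like scheduler_init()  */
    softirq_table[FAKE_SCHEDULE_SOFTIRQ]();   /* "full scheduling"      */

    fake_scheduler_disable();                 /* like freeze_domains()  */
    softirq_table[FAKE_SCHEDULE_SOFTIRQ]();   /* "tasklets only"        */

    fake_scheduler_enable();                  /* like thaw_domains()    */
    softirq_table[FAKE_SCHEDULE_SOFTIRQ]();   /* "full scheduling"      */

    return 0;
}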