From patchwork Fri Sep 27 07:00:49 2019
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
    George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall,
    Jan Beulich, Dario Faggioli, Roger Pau Monné
Date: Fri, 27 Sep 2019 09:00:49 +0200
Message-Id: <20190927070050.12405-46-jgross@suse.com>
In-Reply-To: <20190927070050.12405-1-jgross@suse.com>
References: <20190927070050.12405-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v4 45/46] xen/sched: disable scheduling when
 entering ACPI deep sleep states

When entering deep sleep states, all domains are paused, so all cpus run
only idle vcpus. This makes it possible to stop scheduling completely,
avoiding synchronization problems with core scheduling when individual
cpus are offlined.

The scheduler is disabled by replacing the scheduling softirq handlers
with a dummy routine that only allows tasklets to run.
Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>
---
V2: new patch
---
 xen/arch/x86/acpi/power.c |  4 ++++
 xen/common/schedule.c     | 31 +++++++++++++++++++++++++++++--
 xen/include/xen/sched.h   |  2 ++
 3 files changed, 35 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
index 269b1408d4..47a6c47bbf 100644
--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ -145,12 +145,16 @@ static void freeze_domains(void)
     for_each_domain ( d )
         domain_pause(d);
     rcu_read_unlock(&domlist_read_lock);
+
+    scheduler_disable();
 }
 
 static void thaw_domains(void)
 {
     struct domain *d;
 
+    scheduler_enable();
+
     rcu_read_lock(&domlist_read_lock);
     for_each_domain ( d )
     {
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index d2133558c8..ac840b9dfd 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -89,6 +89,8 @@ extern const struct scheduler *__start_schedulers_array[], *__end_schedulers_arr
 
 static struct scheduler __read_mostly ops;
 
+static bool scheduler_active;
+
 static void sched_set_affinity(
     struct sched_unit *unit, const cpumask_t *hard, const cpumask_t *soft);
 
@@ -2260,6 +2262,13 @@ static struct sched_unit *sched_wait_rendezvous_in(struct sched_unit *prev,
         cpu_relax();
 
         *lock = pcpu_schedule_lock_irq(cpu);
+
+        if ( unlikely(!scheduler_active) )
+        {
+            ASSERT(is_idle_unit(prev));
+            atomic_set(&prev->next_task->rendezvous_out_cnt, 0);
+            prev->rendezvous_in_cnt = 0;
+        }
     }
 
     return prev->next_task;
@@ -2614,14 +2623,32 @@ const cpumask_t *sched_get_opt_cpumask(enum sched_gran opt, unsigned int cpu)
     return mask;
 }
 
+static void schedule_dummy(void)
+{
+    sched_tasklet_check_cpu(smp_processor_id());
+}
+
+void scheduler_disable(void)
+{
+    scheduler_active = false;
+    open_softirq(SCHEDULE_SOFTIRQ, schedule_dummy);
+    open_softirq(SCHED_SLAVE_SOFTIRQ, schedule_dummy);
+}
+
+void scheduler_enable(void)
+{
+    open_softirq(SCHEDULE_SOFTIRQ, schedule);
+    open_softirq(SCHED_SLAVE_SOFTIRQ, sched_slave);
+    scheduler_active = true;
+}
+
 /* Initialise the data structures. */
 void __init scheduler_init(void)
 {
     struct domain *idle_domain;
     int i;
 
-    open_softirq(SCHEDULE_SOFTIRQ, schedule);
-    open_softirq(SCHED_SLAVE_SOFTIRQ, sched_slave);
+    scheduler_enable();
 
     for ( i = 0; i < NUM_SCHEDULERS; i++)
     {
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index a40bd5fb56..629a4c52e0 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -933,6 +933,8 @@ void restore_vcpu_affinity(struct domain *d);
 void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate);
 uint64_t get_cpu_idle_time(unsigned int cpu);
 void sched_guest_idle(void (*idle) (void), unsigned int cpu);
+void scheduler_enable(void);
+void scheduler_disable(void);
 
 /*
  * Used by idle loop to decide whether there is work to do:
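
To make the mechanism in the commit message concrete, here is a minimal,
compilable C sketch of the handler-swap technique. It is not Xen code: the
handler table, the open_softirq() stand-in and the main() driver below are
invented for illustration; only the shape of scheduler_enable() and
scheduler_disable() mirrors the patch.

/*
 * Standalone sketch of swapping softirq handlers to disable scheduling.
 * All names below are illustrative stand-ins, not Xen's real machinery.
 */
#include <stdbool.h>
#include <stdio.h>

enum { SCHEDULE_SOFTIRQ, SCHED_SLAVE_SOFTIRQ, NR_SOFTIRQS };

static void (*softirq_handlers[NR_SOFTIRQS])(void);
static bool scheduler_active;

/* Stand-in for Xen's open_softirq(): register a handler for one softirq. */
static void open_softirq(int nr, void (*handler)(void))
{
    softirq_handlers[nr] = handler;
}

/* Placeholders for Xen's real schedule()/sched_slave() handlers. */
static void schedule(void)    { puts("full scheduling pass"); }
static void sched_slave(void) { puts("slave scheduling pass"); }

/* Dummy handler: no scheduling, only let tasklets run. */
static void schedule_dummy(void)
{
    puts("tasklet check only");  /* models sched_tasklet_check_cpu() */
}

static void scheduler_disable(void)
{
    scheduler_active = false;
    open_softirq(SCHEDULE_SOFTIRQ, schedule_dummy);
    open_softirq(SCHED_SLAVE_SOFTIRQ, schedule_dummy);
}

static void scheduler_enable(void)
{
    open_softirq(SCHEDULE_SOFTIRQ, schedule);
    open_softirq(SCHED_SLAVE_SOFTIRQ, sched_slave);
    scheduler_active = true;
}

int main(void)
{
    scheduler_enable();                    /* as scheduler_init() now does */
    softirq_handlers[SCHEDULE_SOFTIRQ](); /* -> full scheduling pass */

    scheduler_disable();                   /* freeze_domains() path */
    softirq_handlers[SCHEDULE_SOFTIRQ](); /* -> tasklet check only */

    scheduler_enable();                    /* thaw_domains() path */
    softirq_handlers[SCHEDULE_SOFTIRQ](); /* -> full scheduling pass */
    return 0;
}

The appeal of the swap is that the raising sites need no changes: any
SCHEDULE_SOFTIRQ raised during suspend simply lands in the dummy handler.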
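
The hunk in sched_wait_rendezvous_in() is the subtle part: with core
scheduling, a cpu waits for its siblings to rendezvous before switching
context, and a sibling being parked for suspend may never arrive, so a
disabled scheduler must let the waiting idle unit reset the rendezvous
counters and escape. Below is a rough standalone sketch of that bail-out
pattern under the same assumption; the variable names are modeled on, but
not identical to, the patch's.

/*
 * Sketch: rendezvous wait loop with a bail-out flag.
 * Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define SIBLINGS 2                 /* cpus expected at the rendezvous */

static atomic_int rendezvous_in_cnt = SIBLINGS;
static atomic_bool scheduler_active = true;

/* One cpu arriving at the rendezvous and waiting for its sibling. */
static void *cpu_wait(void *arg)
{
    (void)arg;
    atomic_fetch_sub(&rendezvous_in_cnt, 1);

    while ( atomic_load(&rendezvous_in_cnt) > 0 )
    {
        /*
         * If scheduling was disabled while we waited, the sibling may
         * never arrive (it is being parked for suspend): reset the
         * counter and leave instead of spinning forever.
         */
        if ( !atomic_load(&scheduler_active) )
        {
            atomic_store(&rendezvous_in_cnt, 0);
            break;
        }
    }

    puts("waiter proceeds");
    return NULL;
}

int main(void)
{
    pthread_t t;

    pthread_create(&t, NULL, cpu_wait, NULL);

    /* The second "cpu" never arrives; disabling scheduling frees the waiter. */
    atomic_store(&scheduler_active, false);

    pthread_join(t, NULL);
    return 0;
}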