
[5/5] xen: sched: simplify ACPI S3 resume path.

Message ID 148467403070.27920.13927682504995274110.stgit@Solace.fritz.box (mailing list archive)
State New, archived

Commit Message

Dario Faggioli Jan. 17, 2017, 5:27 p.m. UTC
In fact, when domains are being unpaused:
 - it is not necessary to put the vcpus to sleep,
   as they are all already paused;
 - it is not necessary to call vcpu_migrate(), as
   the vcpus are still paused, and therefore won't
   wake up anyway.

Basically, the only important thing is to call
pick_cpu, to let the scheduler run and figure out
what the best initial placement (i.e., the value
stored in v->processor) is for the vcpus, as they
come back up, one after another.

Note that this is consistent with what was happening
before this change, as vcpu_migrate() calls pick_cpu
too; the new path is just much simpler and quicker.
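
To illustrate, the per-vcpu work on this path now
reduces to roughly the following (a simplified
sketch of the new loop; the affinity-repair branch
and the cpumask computation are elided):

    for_each_vcpu ( d, v )
    {
        spinlock_t *lock;

        /* Everything is paused across S3, so no vcpu can be runnable. */
        ASSERT(!vcpu_runnable(v));

        /* ... repair broken affinity and recompute v->processor ... */

        /*
         * vcpu_schedule_lock_irq() resolves the lock via v->processor,
         * so it must be (re)taken only after v->processor has been
         * updated; pick_cpu then runs under the proper runqueue lock.
         */
        lock = vcpu_schedule_lock_irq(v);
        v->processor = SCHED_OP(VCPU2OP(v), pick_cpu, v);
        spin_unlock_irq(lock);
    }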

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
---
 xen/common/schedule.c |   22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

Comments

George Dunlap Jan. 23, 2017, 3:52 p.m. UTC | #1
On Tue, Jan 17, 2017 at 5:27 PM, Dario Faggioli
<dario.faggioli@citrix.com> wrote:
> In fact, when domains are being unpaused:
>  - it is not necessary to put the vcpus to sleep,
>    as they are all already paused;
>  - it is not necessary to call vcpu_migrate(), as
>    the vcpus are still paused, and therefore won't
>    wake up anyway.
>
> Basically, the only important thing is to call
> pick_cpu, to let the scheduler run and figure out
> what the best initial placement (i.e., the value
> stored in v->processor) is for the vcpus, as they
> come back up, one after another.
>
> Note that this is consistent with what was happening
> before this change, as vcpu_migrate() calls pick_cpu
> too; the new path is just much simpler and quicker.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>


Patch

diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index bee5d1f..43b5b99 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -635,7 +635,11 @@ void restore_vcpu_affinity(struct domain *d)
 
     for_each_vcpu ( d, v )
     {
-        spinlock_t *lock = vcpu_schedule_lock_irq(v);
+        spinlock_t *lock;
+
+        ASSERT(!vcpu_runnable(v));
+
+        lock = vcpu_schedule_lock_irq(v);
 
         if ( v->affinity_broken )
         {
@@ -659,17 +663,11 @@ void restore_vcpu_affinity(struct domain *d)
                     cpupool_domain_cpumask(v->domain));
         v->processor = cpumask_any(cpumask_scratch_cpu(cpu));
 
-        if ( v->processor == cpu )
-        {
-            set_bit(_VPF_migrating, &v->pause_flags);
-            spin_unlock_irq(lock);;
-            vcpu_sleep_nosync(v);
-            vcpu_migrate(v);
-        }
-        else
-        {
-            spin_unlock_irq(lock);
-        }
+        spin_unlock_irq(lock);
+
+        lock = vcpu_schedule_lock_irq(v);
+        v->processor = SCHED_OP(VCPU2OP(v), pick_cpu, v);
+        spin_unlock_irq(lock);
     }
 
     domain_update_node_affinity(d);