[v2,3/3] xen/sched: fix latent races accessing vcpu->dirty_cpu

Message ID 20200511112829.5500-4-jgross@suse.com (mailing list archive)
State Superseded
Series xen: Fix some bugs in scheduling

Commit Message

Jürgen Groß May 11, 2020, 11:28 a.m. UTC
The dirty_cpu field of struct vcpu denotes which cpu still holds data
of a vcpu. All accesses to this field should be atomic, as the vcpu
could just be running: in most cases the field is accessed without any
lock held. In particular, sync_local_execstate() and context_switch()
running concurrently for the same vcpu risk failing.

There are some instances where accesses are not done atomically, and,
even worse, where the field is read multiple times when a single read
is required.

Correct that in order to avoid potential problems.

Add some assertions to verify dirty_cpu is handled properly.
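
To illustrate the multiple-read problem (sketch only, mirroring the
sync_vcpu_execstate() change below rather than quoting it verbatim):

    /* Racy: two plain reads of v->dirty_cpu. The value can change
     * between the test and the use, so flush_mask() may be handed a
     * stale or already-clean value. */
    if ( vcpu_cpu_dirty(v) )
        flush_mask(cpumask_of(v->dirty_cpu), FLUSH_VCPU_STATE);

    /* Fixed: latch the field once, atomically, and use the local copy
     * for both the test and the flush. */
    unsigned int dirty_cpu = read_atomic(&v->dirty_cpu);
    if ( is_vcpu_dirty_cpu(dirty_cpu) )
        flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);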

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- convert all accesses to v->dirty_cpu to atomic ones (Jan Beulich)
- drop cast (Julien Grall)
---
 xen/arch/x86/domain.c   | 16 +++++++++++-----
 xen/common/domain.c     |  2 +-
 xen/common/keyhandler.c |  2 +-
 xen/include/xen/sched.h |  2 +-
 4 files changed, 14 insertions(+), 8 deletions(-)

Comments

Jan Beulich May 14, 2020, 7:10 a.m. UTC | #1
On 11.05.2020 13:28, Juergen Gross wrote:
> @@ -1956,13 +1958,17 @@ void sync_local_execstate(void)
>  
>  void sync_vcpu_execstate(struct vcpu *v)
>  {
> -    if ( v->dirty_cpu == smp_processor_id() )
> +    unsigned int dirty_cpu = read_atomic(&v->dirty_cpu);
> +
> +    if ( dirty_cpu == smp_processor_id() )
>          sync_local_execstate();
> -    else if ( vcpu_cpu_dirty(v) )
> +    else if ( is_vcpu_dirty_cpu(dirty_cpu) )
>      {
>          /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
> -        flush_mask(cpumask_of(v->dirty_cpu), FLUSH_VCPU_STATE);
> +        flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
>      }
> +    ASSERT(!is_vcpu_dirty_cpu(dirty_cpu) ||
> +           read_atomic(&v->dirty_cpu) != dirty_cpu);

Repeating my v1.1 comments:

"However, having stared at it for a while now - is this race
 free? I can see this being fine in the (initial) case of
 dirty_cpu == smp_processor_id(), but if this is for a foreign
 CPU, can't the vCPU have gone back to that same CPU again in
 the meantime?"

and later

"There is a time window from late in flush_mask() to the assertion
 you add. All sorts of things can happen during this window on
 other CPUs. IOW what guarantees the vCPU not getting unpaused or
 its affinity getting changed yet another time?"

You did reply that by what is now patch 2 this race can be
eliminated, but I have to admit I don't see why this would be.
Hence at the very least I'd expect justification in either the
description or a code comment as to why there's no race left
(and also no race to be expected to be re-introduced by code
changes elsewhere - very unlikely races are, by their nature,
unlikely to be hit during code development and the associated
testing, hence I'd like there to be sufficiently close to a
guarantee here).

My reservations here may in part be due to not following the
reasoning for patch 2, which therefore I'll have to rely on the
scheduler maintainers to judge on.
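
To spell out the interleaving I'm worried about (hypothetical, assuming
nothing prevents the vCPU from being scheduled again):

    CPU A: sync_vcpu_execstate(v)      elsewhere
    dirty_cpu = read_atomic(...);      /* observes CPU B */
    flush_mask(cpumask_of(B), FLUSH_VCPU_STATE);
                                       vCPU unpaused / affinity changed,
                                       runs on CPU B again, hence
                                       v->dirty_cpu == B once more
    ASSERT(...);  /* re-reads B == dirty_cpu, so the assertion fires
                     even though nothing actually went wrong */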

> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -158,7 +158,7 @@ struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
>  
>      v->domain = d;
>      v->vcpu_id = vcpu_id;
> -    v->dirty_cpu = VCPU_CPU_CLEAN;
> +    write_atomic(&v->dirty_cpu, VCPU_CPU_CLEAN);

This is not strictly necessary (the vCPU won't be acted upon by
any other entity in the system just yet), and with this I'd like
to suggest to drop this change again.

Jan
Jürgen Groß May 14, 2020, 8:50 a.m. UTC | #2
On 14.05.20 09:10, Jan Beulich wrote:
> On 11.05.2020 13:28, Juergen Gross wrote:
>> @@ -1956,13 +1958,17 @@ void sync_local_execstate(void)
>>   
>>   void sync_vcpu_execstate(struct vcpu *v)
>>   {
>> -    if ( v->dirty_cpu == smp_processor_id() )
>> +    unsigned int dirty_cpu = read_atomic(&v->dirty_cpu);
>> +
>> +    if ( dirty_cpu == smp_processor_id() )
>>           sync_local_execstate();
>> -    else if ( vcpu_cpu_dirty(v) )
>> +    else if ( is_vcpu_dirty_cpu(dirty_cpu) )
>>       {
>>           /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
>> -        flush_mask(cpumask_of(v->dirty_cpu), FLUSH_VCPU_STATE);
>> +        flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
>>       }
>> +    ASSERT(!is_vcpu_dirty_cpu(dirty_cpu) ||
>> +           read_atomic(&v->dirty_cpu) != dirty_cpu);
> 
> Repeating my v1.1 comments:
> 
> "However, having stared at it for a while now - is this race
>   free? I can see this being fine in the (initial) case of
>   dirty_cpu == smp_processor_id(), but if this is for a foreign
>   CPU, can't the vCPU have gone back to that same CPU again in
>   the meantime?"
> 
> and later
> 
> "There is a time window from late in flush_mask() to the assertion
>   you add. All sorts of things can happen during this window on
>   other CPUs. IOW what guarantees the vCPU not getting unpaused or
>   its affinity getting changed yet another time?"
> 
> You did reply that by what is now patch 2 this race can be
> eliminated, but I have to admit I don't see why this would be.
> Hence at the very least I'd expect justification in either the
> description or a code comment as to why there's no race left
> (and also no race to be expected to be re-introduced by code
> changes elsewhere - very unlikely races are, by their nature,
> unlikely to be hit during code development and the associated
> testing, hence I'd like there to be sufficiently close to a
> guarantee here).
> 
> My reservations here may in part be due to not following the
> reasoning for patch 2, which therefore I'll have to rely on the
> scheduler maintainers to judge on.

sync_vcpu_execstate() isn't called for a running or runnable vcpu any
longer. I can add an ASSERT() and a comment explaining it if you like
that better.
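Something along these lines (just a sketch of what I have in mind,
using vcpu_runnable() and v->is_running as they exist today):

    /*
     * This function is not called for vcpus which could be running or
     * runnable (see the previous patch), so dirty_cpu can't become
     * "dirty" again behind our back.
     */
    ASSERT(!v->is_running && !vcpu_runnable(v));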

> 
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -158,7 +158,7 @@ struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
>>   
>>       v->domain = d;
>>       v->vcpu_id = vcpu_id;
>> -    v->dirty_cpu = VCPU_CPU_CLEAN;
>> +    write_atomic(&v->dirty_cpu, VCPU_CPU_CLEAN);
> 
> This is not strictly necessary (the vCPU won't be acted upon by
> any other entity in the system just yet), and with this I'd like
> to suggest to drop this change again.

Fine with me.


Juergen
Jan Beulich May 14, 2020, 9:24 a.m. UTC | #3
On 14.05.2020 10:50, Jürgen Groß wrote:
> On 14.05.20 09:10, Jan Beulich wrote:
>> On 11.05.2020 13:28, Juergen Gross wrote:
>>> @@ -1956,13 +1958,17 @@ void sync_local_execstate(void)
>>>     void sync_vcpu_execstate(struct vcpu *v)
>>>   {
>>> -    if ( v->dirty_cpu == smp_processor_id() )
>>> +    unsigned int dirty_cpu = read_atomic(&v->dirty_cpu);
>>> +
>>> +    if ( dirty_cpu == smp_processor_id() )
>>>           sync_local_execstate();
>>> -    else if ( vcpu_cpu_dirty(v) )
>>> +    else if ( is_vcpu_dirty_cpu(dirty_cpu) )
>>>       {
>>>           /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
>>> -        flush_mask(cpumask_of(v->dirty_cpu), FLUSH_VCPU_STATE);
>>> +        flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
>>>       }
>>> +    ASSERT(!is_vcpu_dirty_cpu(dirty_cpu) ||
>>> +           read_atomic(&v->dirty_cpu) != dirty_cpu);
>>
>> Repeating my v1.1 comments:
>>
>> "However, having stared at it for a while now - is this race
>>   free? I can see this being fine in the (initial) case of
>>   dirty_cpu == smp_processor_id(), but if this is for a foreign
>>   CPU, can't the vCPU have gone back to that same CPU again in
>>   the meantime?"
>>
>> and later
>>
>> "There is a time window from late in flush_mask() to the assertion
>>   you add. All sorts of things can happen during this window on
>>   other CPUs. IOW what guarantees the vCPU not getting unpaused or
>>   its affinity getting changed yet another time?"
>>
>> You did reply that by what is now patch 2 this race can be
>> eliminated, but I have to admit I don't see why this would be.
>> Hence at the very least I'd expect justification in either the
>> description or a code comment as to why there's no race left
>> (and also no race to be expected to be re-introduced by code
>> changes elsewhere - very unlikely races are, by their nature,
>> unlikely to be hit during code development and the associated
>> testing, hence I'd like there to be sufficiently close to a
>> guarantee here).
>>
>> My reservations here may in part be due to not following the
>> reasoning for patch 2, which therefore I'll have to rely on the
>> scheduler maintainers to judge on.
> 
> sync_vcpu_execstate() isn't called for a running or runnable vcpu any
> longer. I can add an ASSERT() and a comment explaining it if you like
> that better.

This would help (hopefully people adding new uses of the function
would run into this assertion/comment), but for example the uses
in mapcache_current_vcpu() or do_tasklet_work() look to be pretty
hard to prove they can't happen for a runnable vCPU.

Jan
Jürgen Groß May 14, 2020, 9:29 a.m. UTC | #4
On 14.05.20 11:24, Jan Beulich wrote:
> On 14.05.2020 10:50, Jürgen Groß wrote:
>> On 14.05.20 09:10, Jan Beulich wrote:
>>> On 11.05.2020 13:28, Juergen Gross wrote:
>>>> @@ -1956,13 +1958,17 @@ void sync_local_execstate(void)
>>>>      void sync_vcpu_execstate(struct vcpu *v)
>>>>    {
>>>> -    if ( v->dirty_cpu == smp_processor_id() )
>>>> +    unsigned int dirty_cpu = read_atomic(&v->dirty_cpu);
>>>> +
>>>> +    if ( dirty_cpu == smp_processor_id() )
>>>>            sync_local_execstate();
>>>> -    else if ( vcpu_cpu_dirty(v) )
>>>> +    else if ( is_vcpu_dirty_cpu(dirty_cpu) )
>>>>        {
>>>>            /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
>>>> -        flush_mask(cpumask_of(v->dirty_cpu), FLUSH_VCPU_STATE);
>>>> +        flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
>>>>        }
>>>> +    ASSERT(!is_vcpu_dirty_cpu(dirty_cpu) ||
>>>> +           read_atomic(&v->dirty_cpu) != dirty_cpu);
>>>
>>> Repeating my v1.1 comments:
>>>
>>> "However, having stared at it for a while now - is this race
>>>    free? I can see this being fine in the (initial) case of
>>>    dirty_cpu == smp_processor_id(), but if this is for a foreign
>>>    CPU, can't the vCPU have gone back to that same CPU again in
>>>    the meantime?"
>>>
>>> and later
>>>
>>> "There is a time window from late in flush_mask() to the assertion
>>>    you add. All sorts of things can happen during this window on
>>>    other CPUs. IOW what guarantees the vCPU not getting unpaused or
>>>    its affinity getting changed yet another time?"
>>>
>>> You did reply that by what is now patch 2 this race can be
>>> eliminated, but I have to admit I don't see why this would be.
>>> Hence at the very least I'd expect justification in either the
>>> description or a code comment as to why there's no race left
>>> (and also no race to be expected to be re-introduced by code
>>> changes elsewhere - very unlikely races are, by their nature,
>>> unlikely to be hit during code development and the associated
>>> testing, hence I'd like there to be sufficiently close to a
>>> guarantee here).
>>>
>>> My reservations here may in part be due to not following the
>>> reasoning for patch 2, which therefore I'll have to rely on the
>>> scheduler maintainers to judge on.
>>
>> sync_vcpu_execstate() isn't called for a running or runnable vcpu any
>> longer. I can add an ASSERT() and a comment explaining it if you like
>> that better.
> 
> This would help (hopefully people adding new uses of the function
> would run into this assertion/comment), but for example the uses
> in mapcache_current_vcpu() or do_tasklet_work() look to be pretty
> hard to prove they can't happen for a runnable vCPU.

Those call sync_local_execstate(), not sync_vcpu_execstate().


Juergen
Jan Beulich May 14, 2020, 1:58 p.m. UTC | #5
On 14.05.2020 11:29, Jürgen Groß wrote:
> On 14.05.20 11:24, Jan Beulich wrote:
>> On 14.05.2020 10:50, Jürgen Groß wrote:
>>> On 14.05.20 09:10, Jan Beulich wrote:
>>>> On 11.05.2020 13:28, Juergen Gross wrote:
>>>>> @@ -1956,13 +1958,17 @@ void sync_local_execstate(void)
>>>>>      void sync_vcpu_execstate(struct vcpu *v)
>>>>>    {
>>>>> -    if ( v->dirty_cpu == smp_processor_id() )
>>>>> +    unsigned int dirty_cpu = read_atomic(&v->dirty_cpu);
>>>>> +
>>>>> +    if ( dirty_cpu == smp_processor_id() )
>>>>>            sync_local_execstate();
>>>>> -    else if ( vcpu_cpu_dirty(v) )
>>>>> +    else if ( is_vcpu_dirty_cpu(dirty_cpu) )
>>>>>        {
>>>>>            /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
>>>>> -        flush_mask(cpumask_of(v->dirty_cpu), FLUSH_VCPU_STATE);
>>>>> +        flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
>>>>>        }
>>>>> +    ASSERT(!is_vcpu_dirty_cpu(dirty_cpu) ||
>>>>> +           read_atomic(&v->dirty_cpu) != dirty_cpu);
>>>>
>>>> Repeating my v1.1 comments:
>>>>
>>>> "However, having stared at it for a while now - is this race
>>>>    free? I can see this being fine in the (initial) case of
>>>>    dirty_cpu == smp_processor_id(), but if this is for a foreign
>>>>    CPU, can't the vCPU have gone back to that same CPU again in
>>>>    the meantime?"
>>>>
>>>> and later
>>>>
>>>> "There is a time window from late in flush_mask() to the assertion
>>>>    you add. All sorts of things can happen during this window on
>>>>    other CPUs. IOW what guarantees the vCPU not getting unpaused or
>>>>    its affinity getting changed yet another time?"
>>>>
>>>> You did reply that by what is now patch 2 this race can be
>>>> eliminated, but I have to admit I don't see why this would be.
>>>> Hence at the very least I'd expect justification in either the
>>>> description or a code comment as to why there's no race left
>>>> (and also no race to be expected to be re-introduced by code
>>>> changes elsewhere - very unlikely races are, by their nature,
>>>> unlikely to be hit during code development and the associated
>>>> testing, hence I'd like there to be sufficiently close to a
>>>> guarantee here).
>>>>
>>>> My reservations here may in part be due to not following the
>>>> reasoning for patch 2, which therefore I'll have to rely on the
>>>> scheduler maintainers to judge on.
>>>
>>> sync_vcpu_execstate() isn't called for a running or runnable vcpu any
>>> longer. I can add an ASSERT() and a comment explaining it if you like
>>> that better.
>>
>> This would help (hopefully people adding new uses of the function
>> would run into this assertion/comment), but for example the uses
>> in mapcache_current_vcpu() or do_tasklet_work() look to be pretty
>> hard to prove they can't happen for a runnable vCPU.
> 
> Those call sync_local_execstate(), not sync_vcpu_execstate().

Ouch, as said on the other sub-thread - I'm sorry for mixing those up.

Jan

Patch

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index a4428190d5..2e5717b983 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -183,7 +183,7 @@  void startup_cpu_idle_loop(void)
 
     ASSERT(is_idle_vcpu(v));
     cpumask_set_cpu(v->processor, v->domain->dirty_cpumask);
-    v->dirty_cpu = v->processor;
+    write_atomic(&v->dirty_cpu, v->processor);
 
     reset_stack_and_jump(idle_loop);
 }
@@ -1769,6 +1769,7 @@  static void __context_switch(void)
 
     if ( !is_idle_domain(pd) )
     {
+        ASSERT(read_atomic(&p->dirty_cpu) == cpu);
         memcpy(&p->arch.user_regs, stack_regs, CTXT_SWITCH_STACK_BYTES);
         vcpu_save_fpu(p);
         pd->arch.ctxt_switch->from(p);
@@ -1832,7 +1833,7 @@  void context_switch(struct vcpu *prev, struct vcpu *next)
 {
     unsigned int cpu = smp_processor_id();
     const struct domain *prevd = prev->domain, *nextd = next->domain;
-    unsigned int dirty_cpu = next->dirty_cpu;
+    unsigned int dirty_cpu = read_atomic(&next->dirty_cpu);
 
     ASSERT(prev != next);
     ASSERT(local_irq_is_enabled());
@@ -1844,6 +1845,7 @@  void context_switch(struct vcpu *prev, struct vcpu *next)
     {
         /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
         flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
+        ASSERT(!vcpu_cpu_dirty(next));
     }
 
     _update_runstate_area(prev);
@@ -1956,13 +1958,17 @@  void sync_local_execstate(void)
 
 void sync_vcpu_execstate(struct vcpu *v)
 {
-    if ( v->dirty_cpu == smp_processor_id() )
+    unsigned int dirty_cpu = read_atomic(&v->dirty_cpu);
+
+    if ( dirty_cpu == smp_processor_id() )
         sync_local_execstate();
-    else if ( vcpu_cpu_dirty(v) )
+    else if ( is_vcpu_dirty_cpu(dirty_cpu) )
     {
         /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
-        flush_mask(cpumask_of(v->dirty_cpu), FLUSH_VCPU_STATE);
+        flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
     }
+    ASSERT(!is_vcpu_dirty_cpu(dirty_cpu) ||
+           read_atomic(&v->dirty_cpu) != dirty_cpu);
 }
 
 static int relinquish_memory(
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 7cc9526139..70ff05eefc 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -158,7 +158,7 @@  struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
 
     v->domain = d;
     v->vcpu_id = vcpu_id;
-    v->dirty_cpu = VCPU_CPU_CLEAN;
+    write_atomic(&v->dirty_cpu, VCPU_CPU_CLEAN);
 
     spin_lock_init(&v->virq_lock);
 
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index 87bd145374..68364e987d 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -316,7 +316,7 @@  static void dump_domains(unsigned char key)
                        vcpu_info(v, evtchn_upcall_pending),
                        !vcpu_event_delivery_is_enabled(v));
                 if ( vcpu_cpu_dirty(v) )
-                    printk("dirty_cpu=%u", v->dirty_cpu);
+                    printk("dirty_cpu=%u", read_atomic(&v->dirty_cpu));
                 printk("\n");
                 printk("    pause_count=%d pause_flags=%lx\n",
                        atomic_read(&v->pause_count), v->pause_flags);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 6101761d25..ac53519d7f 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -844,7 +844,7 @@  static inline bool is_vcpu_dirty_cpu(unsigned int cpu)
 
 static inline bool vcpu_cpu_dirty(const struct vcpu *v)
 {
-    return is_vcpu_dirty_cpu(v->dirty_cpu);
+    return is_vcpu_dirty_cpu(read_atomic(&v->dirty_cpu));
 }
 
 void vcpu_block(void);