
[v3,3/6] VMX: Cleanup PI per-cpu blocking list when vcpu is destroyed

Message ID 1472615791-8664-4-git-send-email-feng.wu@intel.com (mailing list archive)
State New, archived

Commit Message

Wu, Feng Aug. 31, 2016, 3:56 a.m. UTC
We should remove the vCPU from the per-cpu blocking list
if it is going to be destroyed.

Signed-off-by: Feng Wu <feng.wu@intel.com>
---
 xen/arch/x86/hvm/vmx/vmx.c | 1 +
 1 file changed, 1 insertion(+)

Comments

Dario Faggioli Sept. 6, 2016, 9:21 a.m. UTC | #1
On Wed, 2016-08-31 at 11:56 +0800, Feng Wu wrote:
> We should remove the vCPU from the per-cpu blocking list
> if it is going to be destroyed.
> 
> Signed-off-by: Feng Wu <feng.wu@intel.com>
> ---
>  xen/arch/x86/hvm/vmx/vmx.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index b869728..37fa2f1 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -346,6 +346,7 @@ static void vmx_vcpu_destroy(struct vcpu *v)
>      vmx_destroy_vmcs(v);
>      vpmu_destroy(v);
>      passive_domain_destroy(v);
> +    vmx_pi_blocking_cleanup(v);
>
I'm not too much into VMX, so I may be wrong (in which case, sorry),
but is it safe to call this after vmx_destroy_vmcs() ?

Also (even if it is), we're basically calling and executing the
following (called by vmx_pi_blocking_cleanup()):

static void vmx_pi_remove_vcpu_from_blocking_list(struct vcpu *v)
{
    unsigned long flags;
    spinlock_t *pi_blocking_list_lock;
    struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;

    /*
     * Set 'NV' field back to posted_intr_vector, so the
     * Posted-Interrupts can be delivered to the vCPU when
     * it is running in non-root mode.
     */
    write_atomic(&pi_desc->nv, posted_intr_vector);

    /* The vCPU is not on any blocking list. */
    pi_blocking_list_lock = v->arch.hvm_vmx.pi_blocking.lock;

    /* Prevent the compiler from eliminating the local variable.*/
    smp_rmb();

    if ( pi_blocking_list_lock == NULL )
        return;

    spin_lock_irqsave(pi_blocking_list_lock, flags);

    /*
     * v->arch.hvm_vmx.pi_blocking.lock == NULL here means the vCPU
     * was removed from the blocking list while we are acquiring the lock.
     */
    if ( v->arch.hvm_vmx.pi_blocking.lock != NULL )
    {
        ASSERT(v->arch.hvm_vmx.pi_blocking.lock == pi_blocking_list_lock);
        list_del(&v->arch.hvm_vmx.pi_blocking.list);
        v->arch.hvm_vmx.pi_blocking.lock = NULL;
    }

    spin_unlock_irqrestore(pi_blocking_list_lock, flags);
}

Considering that we're destroying, isn't this too much? Maybe it's not
a big deal, but I'd have expected that all that is needed here is something
like:

 if ( v->arch.hvm_vmx.pi_blocking.lock )
 {
     spin_lock_irqsave(..);
     list_del(..);
     spin_unlock_irqrestore(..);
 }

Maybe the resume, list_remove, and cleanup functions need to be broken
up a bit more/better?

Also, as a side note (which I think would be more appropriate as a
comment to patch 1, but bear with me, I'm just back from vacation, I
have a lot of catching up to do, and I'm in a hurry! :-P), now that the
function is called vmx_pi_remove_vcpu_from_blocking_list(), this
comment being part of its body sounds a bit weird:

    ...
    /* The vCPU is not on any blocking list. */
    pi_blocking_list_lock = v->arch.hvm_vmx.pi_blocking.lock;
    ...

I'd take the chance for rephrasing it.

Regards,
Dario
Wu, Feng Sept. 6, 2016, 11:27 p.m. UTC | #2
> -----Original Message-----
> From: Dario Faggioli [mailto:dario.faggioli@citrix.com]
> Sent: Tuesday, September 6, 2016 5:22 PM
> To: Wu, Feng <feng.wu@intel.com>; xen-devel@lists.xen.org
> Cc: Tian, Kevin <kevin.tian@intel.com>; george.dunlap@eu.citrix.com;
> andrew.cooper3@citrix.com; jbeulich@suse.com
> Subject: Re: [Xen-devel] [PATCH v3 3/6] VMX: Cleanup PI per-cpu blocking list
> when vcpu is destroyed
> 
> On Wed, 2016-08-31 at 11:56 +0800, Feng Wu wrote:
> > We should remove the vCPU from the per-cpu blocking list
> > if it is going to be destroyed.
> >
> > Signed-off-by: Feng Wu <feng.wu@intel.com>
> > ---
> >  xen/arch/x86/hvm/vmx/vmx.c | 1 +
> >  1 file changed, 1 insertion(+)
> >
> > diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> > index b869728..37fa2f1 100644
> > --- a/xen/arch/x86/hvm/vmx/vmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vmx.c
> > @@ -346,6 +346,7 @@ static void vmx_vcpu_destroy(struct vcpu *v)
> >      vmx_destroy_vmcs(v);
> >      vpmu_destroy(v);
> >      passive_domain_destroy(v);
> > +    vmx_pi_blocking_cleanup(v);
> >
> I'm not too much into VMX, so I may be wrong (in which case, sorry),
> but is it safe to call this after vmx_destroy_vmcs() ?

It is safe, since vmx_pi_blocking_cleanup(v) doesn't deal with any
real VMCS state; it just handles the PI blocking queue. But your
doubt is a fair one: maybe it looks more reasonable to move it
before vmx_destroy_vmcs().

> 
> Also (even if it is), we're basically calling and executing the
> following (called by vmx_pi_blocking_cleanup()):
> 
> static void vmx_pi_remove_vcpu_from_blocking_list(struct vcpu *v)
> {
>     unsigned long flags;
>     spinlock_t *pi_blocking_list_lock;
>     struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
> 
>     /*
>      * Set 'NV' field back to posted_intr_vector, so the
>      * Posted-Interrupts can be delivered to the vCPU when
>      * it is running in non-root mode.
>      */
>     write_atomic(&pi_desc->nv, posted_intr_vector);
> 
>     /* The vCPU is not on any blocking list. */
>     pi_blocking_list_lock = v->arch.hvm_vmx.pi_blocking.lock;
> 
>     /* Prevent the compiler from eliminating the local variable.*/
>     smp_rmb();
> 
>     if ( pi_blocking_list_lock == NULL )
>         return;
> 
>     spin_lock_irqsave(pi_blocking_list_lock, flags);
> 
>     /*
>      * v->arch.hvm_vmx.pi_blocking.lock == NULL here means the vCPU
>      * was removed from the blocking list while we are acquiring the lock.
>      */
>     if ( v->arch.hvm_vmx.pi_blocking.lock != NULL )
>     {
>         ASSERT(v->arch.hvm_vmx.pi_blocking.lock == pi_blocking_list_lock);
>         list_del(&v->arch.hvm_vmx.pi_blocking.list);
>         v->arch.hvm_vmx.pi_blocking.lock = NULL;
>     }
> 
>     spin_unlock_irqrestore(pi_blocking_list_lock, flags);
> }
> 
> Considering that we're destroying, isn't this too much? Maybe it's not
> a big deal, but I'd have expected that all that is needed here is something
> like:
> 
>  if ( v->arch.hvm_vmx.pi_blocking.lock )
>  {
>      spin_lock_irqsave(..);
>      list_del(..);
>      spin_unlock_irqrestore(..);
>  }
> 
> Maybe the resume, list_remove, and cleanup functions need to be broken
> up a bit more/better?


Actually, this function carefully handles some corner cases, since
'v->arch.hvm_vmx.pi_blocking.lock' can be set to NULL in other places,
such as in pi_wakeup_interrupt(). So even when we are on the destroy path,
we still need to save it to a local variable and check its value, then delete
the vCPU from the list if needed. So if we really want to eliminate something
in this function for the destroy path, maybe it is only the restore of the NV
field that we can skip. If that is the case, it seems this is not a big deal :)

> 
> Also, as a side note (which I think would be more appropriate as a
> comment to patch 1, but bear with me, I'm just back from vacation, I
> have a lot of catching up to do, and I'm in a hurry! :-P), now that the
> function is called vmx_pi_remove_vcpu_from_blocking_list(), this
> comment being part of its body sounds a bit weird:
> 
>     ...
>     /* The vCPU is not on any blocking list. */
>     pi_blocking_list_lock = v->arch.hvm_vmx.pi_blocking.lock;
>     ...
> 
> I'd take the chance for rephrasing it.

Oh, yes, this needs some adjustment. Thanks for the comments!

Thanks,
Feng

> 
> Regards,
> Dario
> --
> <<This happens because I choose it to happen!>> (Raistlin Majere)
> -----------------------------------------------------------------
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

Patch

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index b869728..37fa2f1 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -346,6 +346,7 @@  static void vmx_vcpu_destroy(struct vcpu *v)
     vmx_destroy_vmcs(v);
     vpmu_destroy(v);
     passive_domain_destroy(v);
+    vmx_pi_blocking_cleanup(v);
 }
 
 static DEFINE_PER_CPU(struct vmx_msr_state, host_msr_state);