
[v1,1/2] KVM: arm64: Acquire mp_state_lock in kvm_arch_vcpu_ioctl_vcpu_init()

Message ID 20230419021852.2981107-2-reijiw@google.com (mailing list archive)
State New, archived
Series KVM: arm64: Fix bugs related to mp_state updates

Commit Message

Reiji Watanabe April 19, 2023, 2:18 a.m. UTC
kvm_arch_vcpu_ioctl_vcpu_init() doesn't acquire mp_state_lock
when setting the mp_state to KVM_MP_STATE_RUNNABLE. Fix the
code to acquire the lock.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/arm.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

Comments

Marc Zyngier April 19, 2023, 7:12 a.m. UTC | #1
On Wed, 19 Apr 2023 03:18:51 +0100,
Reiji Watanabe <reijiw@google.com> wrote:
> 
> kvm_arch_vcpu_ioctl_vcpu_init() doesn't acquire mp_state_lock
> when setting the mp_state to KVM_MP_STATE_RUNNABLE. Fix the
> code to acquire the lock.
> 
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> ---
>  arch/arm64/kvm/arm.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index fbafcbbcc463..388aa4f18f21 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -1244,8 +1244,11 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
>  	 */
>  	if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
>  		kvm_arm_vcpu_power_off(vcpu);
> -	else
> +	else {
> +		spin_lock(&vcpu->arch.mp_state_lock);
>  		WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_RUNNABLE);
> +		spin_unlock(&vcpu->arch.mp_state_lock);
> +	}
>  
>  	return 0;
>  }

I'm not entirely convinced that this fixes anything. What does the
lock hazard against given that the write is atomic? But maybe a
slightly more readable version of this would be to expand the critical section
this way:

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 4ec888fdd4f7..bb21d0c25de7 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1246,11 +1246,15 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 	/*
 	 * Handle the "start in power-off" case.
 	 */
+	spin_lock(&vcpu->arch.mp_state_lock);
+
 	if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
-		kvm_arm_vcpu_power_off(vcpu);
+		__kvm_arm_vcpu_power_off(vcpu);
 	else
 		WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_RUNNABLE);
 
+	spin_unlock(&vcpu->arch.mp_state_lock);
+
 	return 0;
 }
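
(For reference, since the hunk above only shows its call site:
__kvm_arm_vcpu_power_off() is the variant that expects mp_state_lock
to already be held, with kvm_arm_vcpu_power_off() as the locking
wrapper around it. Roughly this shape, paraphrased rather than
quoted from the tree:

static void __kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu)
{
	/* Mark the vCPU stopped; caller must hold mp_state_lock. */
	WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_STOPPED);
	kvm_make_request(KVM_REQ_SLEEP, vcpu);
	kvm_vcpu_kick(vcpu);
}

void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu)
{
	spin_lock(&vcpu->arch.mp_state_lock);
	__kvm_arm_vcpu_power_off(vcpu);
	spin_unlock(&vcpu->arch.mp_state_lock);
}

Using the __ variant above matters: calling the locking wrapper from
inside the new critical section would deadlock on mp_state_lock.)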

Thoughts?

	M.
Reiji Watanabe April 20, 2023, 2:13 a.m. UTC | #2
Hi Marc,

On Wed, Apr 19, 2023 at 08:12:45AM +0100, Marc Zyngier wrote:
> On Wed, 19 Apr 2023 03:18:51 +0100,
> Reiji Watanabe <reijiw@google.com> wrote:
> > kvm_arch_vcpu_ioctl_vcpu_init() doesn't acquire mp_state_lock
> > when setting the mp_state to KVM_MP_STATE_RUNNABLE. Fix the
> > code to acquire the lock.
> > 
> > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > ---
> >  arch/arm64/kvm/arm.c | 5 ++++-
> >  1 file changed, 4 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index fbafcbbcc463..388aa4f18f21 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -1244,8 +1244,11 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
> >  	 */
> >  	if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
> >  		kvm_arm_vcpu_power_off(vcpu);
> > -	else
> > +	else {
> > +		spin_lock(&vcpu->arch.mp_state_lock);
> >  		WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_RUNNABLE);
> > +		spin_unlock(&vcpu->arch.mp_state_lock);
> > +	}
> >  
> >  	return 0;
> >  }
> 
> I'm not entirely convinced that this fixes anything. What does the
> lock hazard against given that the write is atomic? But maybe a

It appears that kvm_psci_vcpu_on() holds the lock expecting that
the vCPU's mp_state cannot change while the lock is held.  Although
I don't think this code causes any real issue in practice today, I am
a little concerned, in terms of future maintenance, about leaving one
instance that updates mp_state without acquiring the lock, as that
would mean holding the lock no longer prevents mp_state from changing.
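
To illustrate, kvm_psci_vcpu_on() does its check-then-update of
mp_state entirely under the lock, along these lines (a condensed
sketch of arch/arm64/kvm/psci.c, not the exact code):

	spin_lock(&vcpu->arch.mp_state_lock);

	/* Refuse PSCI_CPU_ON for a vCPU that isn't powered off. */
	if (!kvm_arm_vcpu_stopped(vcpu)) {
		ret = PSCI_RET_ALREADY_ON;
		goto out_unlock;
	}

	/* ... stash the reset state for the target vCPU ... */

	WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_RUNNABLE);
	kvm_vcpu_wake_up(vcpu);
out_unlock:
	spin_unlock(&vcpu->arch.mp_state_lock);

That stopped-check is only meaningful if every writer of mp_state
takes the lock, which is why the odd one out bothers me.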

What do you think ?

> slightly more readable version of this would be to expand the critical section
> this way:
> 
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 4ec888fdd4f7..bb21d0c25de7 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -1246,11 +1246,15 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
>  	/*
>  	 * Handle the "start in power-off" case.
>  	 */
> +	spin_lock(&vcpu->arch.mp_state_lock);
> +
>  	if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
> -		kvm_arm_vcpu_power_off(vcpu);
> +		__kvm_arm_vcpu_power_off(vcpu);
>  	else
>  		WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_RUNNABLE);
>  
> +	spin_unlock(&vcpu->arch.mp_state_lock);
> +
>  	return 0;
>  }
> 
> Thoughts?

Yes, it looks better!

Thank you,
Reiji
Marc Zyngier April 20, 2023, 8:16 a.m. UTC | #3
On Thu, 20 Apr 2023 03:13:02 +0100,
Reiji Watanabe <reijiw@google.com> wrote:
> 
> Hi Marc,
> 
> On Wed, Apr 19, 2023 at 08:12:45AM +0100, Marc Zyngier wrote:
> > On Wed, 19 Apr 2023 03:18:51 +0100,
> > Reiji Watanabe <reijiw@google.com> wrote:
> > > kvm_arch_vcpu_ioctl_vcpu_init() doesn't acquire mp_state_lock
> > > when setting the mp_state to KVM_MP_STATE_RUNNABLE. Fix the
> > > code to acquire the lock.
> > > 
> > > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > > ---
> > >  arch/arm64/kvm/arm.c | 5 ++++-
> > >  1 file changed, 4 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > > index fbafcbbcc463..388aa4f18f21 100644
> > > --- a/arch/arm64/kvm/arm.c
> > > +++ b/arch/arm64/kvm/arm.c
> > > @@ -1244,8 +1244,11 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
> > >  	 */
> > >  	if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
> > >  		kvm_arm_vcpu_power_off(vcpu);
> > > -	else
> > > +	else {
> > > +		spin_lock(&vcpu->arch.mp_state_lock);
> > >  		WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_RUNNABLE);
> > > +		spin_unlock(&vcpu->arch.mp_state_lock);
> > > +	}
> > >  
> > >  	return 0;
> > >  }
> > 
> > I'm not entirely convinced that this fixes anything. What does the
> > lock hazard against given that the write is atomic? But maybe a
> 
> It appears that kvm_psci_vcpu_on() holds the lock expecting that
> the vCPU's mp_state cannot change while the lock is held.  Although
> I don't think this code causes any real issue in practice today, I am
> a little concerned, in terms of future maintenance, about leaving one
> instance that updates mp_state without acquiring the lock, as that
> would mean holding the lock no longer prevents mp_state from changing.
> 
> What do you think ?

Right, fair enough. It is probably better to take the lock and not
have to think about this sort of thing... I'm becoming more lazy by
the minute!

> 
> > slightly more readable version of this would be to expand the critical section
> > this way:
> > 
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 4ec888fdd4f7..bb21d0c25de7 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -1246,11 +1246,15 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
> >  	/*
> >  	 * Handle the "start in power-off" case.
> >  	 */
> > +	spin_lock(&vcpu->arch.mp_state_lock);
> > +
> >  	if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
> > -		kvm_arm_vcpu_power_off(vcpu);
> > +		__kvm_arm_vcpu_power_off(vcpu);
> >  	else
> >  		WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_RUNNABLE);
> >  
> > +	spin_unlock(&vcpu->arch.mp_state_lock);
> > +
> >  	return 0;
> >  }
> > 
> > Thoughts?
> 
> Yes, it looks better!

Cool. I've applied this change to your patch, applied the series to
the lock inversion branch, and remerged the branch into -next.

We're getting there! ;-)

	M.
Reiji Watanabe April 21, 2023, 3:27 a.m. UTC | #4
On Thu, Apr 20, 2023 at 1:16 AM Marc Zyngier <maz@kernel.org> wrote:
>
> On Thu, 20 Apr 2023 03:13:02 +0100,
> Reiji Watanabe <reijiw@google.com> wrote:
> >
> > Hi Marc,
> >
> > On Wed, Apr 19, 2023 at 08:12:45AM +0100, Marc Zyngier wrote:
> > > On Wed, 19 Apr 2023 03:18:51 +0100,
> > > Reiji Watanabe <reijiw@google.com> wrote:
> > > > kvm_arch_vcpu_ioctl_vcpu_init() doesn't acquire mp_state_lock
> > > > when setting the mp_state to KVM_MP_STATE_RUNNABLE. Fix the
> > > > code to acquire the lock.
> > > >
> > > > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > > > ---
> > > >  arch/arm64/kvm/arm.c | 5 ++++-
> > > >  1 file changed, 4 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > > > index fbafcbbcc463..388aa4f18f21 100644
> > > > --- a/arch/arm64/kvm/arm.c
> > > > +++ b/arch/arm64/kvm/arm.c
> > > > @@ -1244,8 +1244,11 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
> > > >  	 */
> > > >  	if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
> > > >  		kvm_arm_vcpu_power_off(vcpu);
> > > > -	else
> > > > +	else {
> > > > +		spin_lock(&vcpu->arch.mp_state_lock);
> > > >  		WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_RUNNABLE);
> > > > +		spin_unlock(&vcpu->arch.mp_state_lock);
> > > > +	}
> > > > 
> > > >  	return 0;
> > > >  }
> > >
> > > I'm not entirely convinced that this fixes anything. What does the
> > > lock hazard against given that the write is atomic? But maybe a
> >
> > It appears that kvm_psci_vcpu_on() holds the lock expecting that
> > the vCPU's mp_state cannot change while the lock is held.  Although
> > I don't think this code causes any real issue in practice today, I am
> > a little concerned, in terms of future maintenance, about leaving one
> > instance that updates mp_state without acquiring the lock, as that
> > would mean holding the lock no longer prevents mp_state from changing.
> >
> > What do you think ?
>
> Right, fair enough. It is probably better to take the lock and not
> have to think about this sort of thing... I'm becoming more lazy by
> the minute!
>
> >
> > > slightly more readable version of this would be to expand the critical section
> > > this way:
> > >
> > > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > > index 4ec888fdd4f7..bb21d0c25de7 100644
> > > --- a/arch/arm64/kvm/arm.c
> > > +++ b/arch/arm64/kvm/arm.c
> > > @@ -1246,11 +1246,15 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
> > >  	/*
> > >  	 * Handle the "start in power-off" case.
> > >  	 */
> > > +	spin_lock(&vcpu->arch.mp_state_lock);
> > > +
> > >  	if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
> > > -		kvm_arm_vcpu_power_off(vcpu);
> > > +		__kvm_arm_vcpu_power_off(vcpu);
> > >  	else
> > >  		WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_RUNNABLE);
> > > 
> > > +	spin_unlock(&vcpu->arch.mp_state_lock);
> > > +
> > >  	return 0;
> > >  }
> > >
> > > Thoughts?
> >
> > Yes, it looks better!
>
> Cool. I've applied this change to your patch, applied the series to
> the lock inversion branch, and remerged the branch into -next.
>
> We're getting there! ;-)

Thank you, Marc!
Reiji

Patch

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index fbafcbbcc463..388aa4f18f21 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1244,8 +1244,11 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 	 */
 	if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
 		kvm_arm_vcpu_power_off(vcpu);
-	else
+	else {
+		spin_lock(&vcpu->arch.mp_state_lock);
 		WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_RUNNABLE);
+		spin_unlock(&vcpu->arch.mp_state_lock);
+	}
 
 	return 0;
 }