
[1/6] KVM: Explicitly verify target vCPU is online in kvm_get_vcpu()

Message ID 20241009150455.1057573-2-seanjc@google.com (mailing list archive)
State New, archived
Series KVM: Fix bugs in vCPUs xarray usage

Commit Message

Sean Christopherson Oct. 9, 2024, 3:04 p.m. UTC
Explicitly verify the target vCPU is fully online _prior_ to clamping the
index in kvm_get_vcpu().  If the index is "bad", the nospec clamping will
generate '0', i.e. KVM will return vCPU0 instead of NULL.
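
For illustration, a simplified stand-in for the clamping semantics described above (array_index_nospec() itself is a masking-based speculation barrier; this sketch only models its return value, it is not the kernel implementation):

/*
 * Simplified model of array_index_nospec()'s return value: the index is
 * passed through when it is in bounds and forced to 0 otherwise.
 */
static unsigned long nospec_clamp_model(unsigned long index, unsigned long size)
{
	return index < size ? index : 0;
}

/*
 * With online_vcpus == 4, an index of 7 "clamps" to 0, so without an
 * explicit bounds check the lookup hands back vCPU0 instead of NULL.
 */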

In practice, the bug is unlikely to cause problems, as it will only come
into play if userspace or the guest is buggy or misbehaving, e.g. KVM may
send interrupts to vCPU0 instead of dropping them on the floor.

However, returning vCPU0 when it shouldn't exist per online_vcpus is
problematic now that KVM uses an xarray for the vCPUs array, as KVM needs
to insert into the xarray before publishing the vCPU to userspace (see
commit c5b077549136 ("KVM: Convert the kvm->vcpus array to a xarray")),
i.e. before vCPU creation is guaranteed to succeed.
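
Roughly, the ordering the changelog describes looks like the sketch below (heavily simplified, error unwinding and locking omitted; remaining_setup_that_may_fail() is a placeholder for the fd installation and architecture hooks, not a real function):

/* Placeholder for the setup steps that run after the xarray insertion. */
static int remaining_setup_that_may_fail(struct kvm_vcpu *vcpu);

static int create_vcpu_ordering_sketch(struct kvm *kvm, struct kvm_vcpu *vcpu, int idx)
{
	int r;

	/* The vCPU is inserted into the xarray first ... */
	r = xa_insert(&kvm->vcpu_array, idx, vcpu, GFP_KERNEL_ACCOUNT);
	if (r)
		return r;

	/*
	 * ... but later setup can still fail, after which the caller frees
	 * the vCPU even though the xarray entry was already populated ...
	 */
	r = remaining_setup_that_may_fail(vcpu);
	if (r)
		return r;

	/*
	 * ... so only this final publish makes the index count as "online".
	 * Pairs with the reader-side barrier in kvm_get_vcpu().
	 */
	smp_wmb();
	atomic_inc(&kvm->online_vcpus);
	return 0;
}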

As a result, incorrectly providing access to vCPU0 will trigger a
use-after-free if vCPU0 is dereferenced and kvm_vm_ioctl_create_vcpu()
bails out of vCPU creation due to an error and frees vCPU0.  Commit
afb2acb2e3a3 ("KVM: Fix vcpu_array[0] races") papered over that issue, but
in doing so introduced an unsolvable teardown conundrum.  Preventing
accesses to vCPU0 before it's fully online will allow reverting commit
afb2acb2e3a3, without re-introducing the vcpu_array[0] UAF race.

Fixes: 1d487e9bf8ba ("KVM: fix spectrev1 gadgets")
Cc: stable@vger.kernel.org
Cc: Will Deacon <will@kernel.org>
Cc: Michal Luczaj <mhal@rbox.co>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 include/linux/kvm_host.h | 9 +++++++++
 1 file changed, 9 insertions(+)

Comments

Gupta, Pankaj Oct. 10, 2024, 5:31 a.m. UTC | #1
On 10/9/2024 5:04 PM, Sean Christopherson wrote:
> Explicitly verify the target vCPU is fully online _prior_ to clamping the
> index in kvm_get_vcpu().  If the index is "bad", the nospec clamping will
> generate '0', i.e. KVM will return vCPU0 instead of NULL.
> 
> In practice, the bug is unlikely to cause problems, as it will only come
> into play if userspace or the guest is buggy or misbehaving, e.g. KVM may
> send interrupts to vCPU0 instead of dropping them on the floor.
> 
> However, returning vCPU0 when it shouldn't exist per online_vcpus is
> problematic now that KVM uses an xarray for the vCPUs array, as KVM needs
> to insert into the xarray before publishing the vCPU to userspace (see
> commit c5b077549136 ("KVM: Convert the kvm->vcpus array to a xarray")),
> i.e. before vCPU creation is guaranteed to succeed.
> 
> As a result, incorrectly providing access to vCPU0 will trigger a
> use-after-free if vCPU0 is dereferenced and kvm_vm_ioctl_create_vcpu()
> bails out of vCPU creation due to an error and frees vCPU0.  Commit
> afb2acb2e3a3 ("KVM: Fix vcpu_array[0] races") papered over that issue, but
> in doing so introduced an unsolvable teardown conundrum.  Preventing
> accesses to vCPU0 before it's fully online will allow reverting commit
> afb2acb2e3a3, without re-introducing the vcpu_array[0] UAF race.

I think I observed this (the cause, not the effect on teardown) 
accidentally when creating a vCPU with an overflowing vcpu_id.

> 
> Fixes: 1d487e9bf8ba ("KVM: fix spectrev1 gadgets")
> Cc: stable@vger.kernel.org
> Cc: Will Deacon <will@kernel.org>
> Cc: Michal Luczaj <mhal@rbox.co>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>   include/linux/kvm_host.h | 9 +++++++++
>   1 file changed, 9 insertions(+)
> 
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index db567d26f7b9..450dd0444a92 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -969,6 +969,15 @@ static inline struct kvm_io_bus *kvm_get_bus(struct kvm *kvm, enum kvm_bus idx)
>   static inline struct kvm_vcpu *kvm_get_vcpu(struct kvm *kvm, int i)
>   {
>   	int num_vcpus = atomic_read(&kvm->online_vcpus);
> +
> +	/*
> +	 * Explicitly verify the target vCPU is online, as the anti-speculation
> +	 * logic only limits the CPU's ability to speculate, e.g. given a "bad"
> +	 * index, clamping the index to 0 would return vCPU0, not NULL.
> +	 */
> +	if (i >= num_vcpus)
> +		return NULL;

Would sev.c need a NULL check for this?

sev_migrate_from()
...
src_vcpu = kvm_get_vcpu(src_kvm, i);
src_svm = to_svm(src_vcpu);
...

Apart from the above comment, this looks good to me:

Reviewed-by: Pankaj Gupta <pankaj.gupta@amd.com>


> +
>   	i = array_index_nospec(i, num_vcpus);
>   
>   	/* Pairs with smp_wmb() in kvm_vm_ioctl_create_vcpu.  */
Sean Christopherson Oct. 10, 2024, 3:54 p.m. UTC | #2
On Thu, Oct 10, 2024, Pankaj Gupta wrote:
> On 10/9/2024 5:04 PM, Sean Christopherson wrote:
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index db567d26f7b9..450dd0444a92 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -969,6 +969,15 @@ static inline struct kvm_io_bus *kvm_get_bus(struct kvm *kvm, enum kvm_bus idx)
> >   static inline struct kvm_vcpu *kvm_get_vcpu(struct kvm *kvm, int i)
> >   {
> >   	int num_vcpus = atomic_read(&kvm->online_vcpus);
> > +
> > +	/*
> > +	 * Explicitly verify the target vCPU is online, as the anti-speculation
> > +	 * logic only limits the CPU's ability to speculate, e.g. given a "bad"
> > +	 * index, clamping the index to 0 would return vCPU0, not NULL.
> > +	 */
> > +	if (i >= num_vcpus)
> > +		return NULL;
> 
> Would sev.c need a NULL check for this?
> 
> sev_migrate_from()
> ...
> src_vcpu = kvm_get_vcpu(src_kvm, i);
> src_svm = to_svm(src_vcpu);
> ...

Nope, sev_check_source_vcpus() verifies the source and destination have the same
number of online vCPUs before calling sev_migrate_from(), and it's all done with
both VMs locked.

static int sev_check_source_vcpus(struct kvm *dst, struct kvm *src)
{
	struct kvm_vcpu *src_vcpu;
	unsigned long i;

	if (!sev_es_guest(src))
		return 0;

	if (atomic_read(&src->online_vcpus) != atomic_read(&dst->online_vcpus))
		return -EINVAL;

	kvm_for_each_vcpu(i, src_vcpu, src) {
		if (!src_vcpu->arch.guest_state_protected)
			return -EINVAL;
	}

	return 0;
}
Gupta, Pankaj Oct. 10, 2024, 4:33 p.m. UTC | #3
On 10/10/2024 5:54 PM, Sean Christopherson wrote:
> On Thu, Oct 10, 2024, Pankaj Gupta wrote:
>> On 10/9/2024 5:04 PM, Sean Christopherson wrote:
>>> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
>>> index db567d26f7b9..450dd0444a92 100644
>>> --- a/include/linux/kvm_host.h
>>> +++ b/include/linux/kvm_host.h
>>> @@ -969,6 +969,15 @@ static inline struct kvm_io_bus *kvm_get_bus(struct kvm *kvm, enum kvm_bus idx)
>>>    static inline struct kvm_vcpu *kvm_get_vcpu(struct kvm *kvm, int i)
>>>    {
>>>    	int num_vcpus = atomic_read(&kvm->online_vcpus);
>>> +
>>> +	/*
>>> +	 * Explicitly verify the target vCPU is online, as the anti-speculation
>>> +	 * logic only limits the CPU's ability to speculate, e.g. given a "bad"
>>> +	 * index, clamping the index to 0 would return vCPU0, not NULL.
>>> +	 */
>>> +	if (i >= num_vcpus)
>>> +		return NULL;
>>
>> Would sev.c need a NULL check for this?
>>
>> sev_migrate_from()
>> ...
>> src_vcpu = kvm_get_vcpu(src_kvm, i);
>> src_svm = to_svm(src_vcpu);
>> ...
> 
> Nope, sev_check_source_vcpus() verifies the source and destination have the same
> number of online vCPUs before calling sev_migrate_from(), and it's all done with
> both VMs locked.
> 
> static int sev_check_source_vcpus(struct kvm *dst, struct kvm *src)
> {
> 	struct kvm_vcpu *src_vcpu;
> 	unsigned long i;
> 
> 	if (!sev_es_guest(src))
> 		return 0;
> 
> 	if (atomic_read(&src->online_vcpus) != atomic_read(&dst->online_vcpus))
> 		return -EINVAL;
> 
> 	kvm_for_each_vcpu(i, src_vcpu, src) {
> 		if (!src_vcpu->arch.guest_state_protected)
> 			return -EINVAL;
> 	}
> 
> 	return 0;
> }

Yes, right!

Thanks,
Pankaj

Patch

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index db567d26f7b9..450dd0444a92 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -969,6 +969,15 @@ static inline struct kvm_io_bus *kvm_get_bus(struct kvm *kvm, enum kvm_bus idx)
 static inline struct kvm_vcpu *kvm_get_vcpu(struct kvm *kvm, int i)
 {
 	int num_vcpus = atomic_read(&kvm->online_vcpus);
+
+	/*
+	 * Explicitly verify the target vCPU is online, as the anti-speculation
+	 * logic only limits the CPU's ability to speculate, e.g. given a "bad"
+	 * index, clamping the index to 0 would return vCPU0, not NULL.
+	 */
+	if (i >= num_vcpus)
+		return NULL;
+
 	i = array_index_nospec(i, num_vcpus);
 
 	/* Pairs with smp_wmb() in kvm_vm_ioctl_create_vcpu.  */
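
The context line above references a write/read barrier pairing: the creation side populates the xarray entry before bumping online_vcpus (with an smp_wmb() between them), and the lookup side reads online_vcpus first and only then loads from the xarray. A generic sketch of that publish/observe pattern, with illustrative names rather than the exact kernel code:

/* Writer: fill in the slot, then publish the new count. */
void publish_entry(struct xarray *arr, atomic_t *count, unsigned long idx, void *entry)
{
	xa_store(arr, idx, entry, GFP_KERNEL);
	smp_wmb();		/* order the slot store before the count update */
	atomic_inc(count);
}

/* Reader: snapshot the count, then (after a read barrier) load the slot. */
void *lookup_entry(struct xarray *arr, atomic_t *count, unsigned long idx)
{
	if (idx >= atomic_read(count))
		return NULL;
	smp_rmb();		/* order the count read before the slot load */
	return xa_load(arr, idx);
}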