
[v2] kvm: potential NULL pointer dereference in kvm_vm_ioctl_create_vcpu()

Message ID 20250418134440.379-1-chenyufeng@iie.ac.cn (mailing list archive)

Commit Message

Chen Yufeng April 18, 2025, 1:44 p.m. UTC
A patch similar to commit 5593473a1e6c ("KVM: avoid NULL pointer
 dereference in kvm_dirty_ring_push").

If kvm_get_vcpu_by_id() or xa_insert() fails, kvm_vm_ioctl_create_vcpu()
calls kvm_dirty_ring_free(), which frees ring->dirty_gfns and sets it
to NULL. It then calls kvm_arch_vcpu_destroy(), which may reach
kvm_dirty_ring_push() through a specific call stack under the same
conditions described in that commit. kvm_dirty_ring_push() would then
dereference ring->dirty_gfns, a NULL pointer.

v2:
 - fixed in a better way, by moving the kvm_dirty_ring_free() call
   after kvm_arch_vcpu_destroy()

v1: https://lore.kernel.org/kvm/596ce9b2-aa00-4bc5-ae20-451f3176d904@redhat.com

Signed-off-by: Chen Yufeng <chenyufeng@iie.ac.cn>
---
 virt/kvm/kvm_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Paolo Bonzini April 18, 2025, 1:50 p.m. UTC | #1
On 4/18/25 15:44, Chen Yufeng wrote:
> A patch similar to commit 5593473a1e6c ("KVM: avoid NULL pointer
>   dereference in kvm_dirty_ring_push").
> 
> If kvm_get_vcpu_by_id() or xa_insert() failed, kvm_vm_ioctl_create_vcpu()
> will call kvm_dirty_ring_free(), freeing ring->dirty_gfns and setting it
> to NULL. Then, it calls kvm_arch_vcpu_destroy(), which may call
> kvm_dirty_ring_push() in specific call stack under the same conditions as
> previous commit said. Finally, kvm_dirty_ring_push() will use
> ring->dirty_gfns, leading to a NULL pointer dereference.

Actually I'm not convinced that this can happen; at least not in the 
same way as commit 5593473a1e6c, because KVM_RUN can only be invoked 
after the "point of no return" of create_vcpu_fd().

The patch is good anyway because it's cleaner to use the same order as 
in kvm_vcpu_destroy(), but perhaps you can also move 
kvm_dirty_ring_alloc() before kvm_arch_vcpu_create(), so that the order 
remains opposite for creation and destruction?

Thanks,

Paolo
Chen Yufeng April 19, 2025, 7:47 a.m. UTC | #2
> On 4/18/25 15:44, Chen Yufeng wrote:
> > A patch similar to commit 5593473a1e6c ("KVM: avoid NULL pointer
> >   dereference in kvm_dirty_ring_push").
> > 
> > If kvm_get_vcpu_by_id() or xa_insert() failed, kvm_vm_ioctl_create_vcpu()
> > will call kvm_dirty_ring_free(), freeing ring->dirty_gfns and setting it
> > to NULL. Then, it calls kvm_arch_vcpu_destroy(), which may call
> > kvm_dirty_ring_push() in specific call stack under the same conditions as
> > previous commit said. Finally, kvm_dirty_ring_push() will use
> > ring->dirty_gfns, leading to a NULL pointer dereference.
> 
> Actually I'm not convinced that this can happen; at least not in the 
> same way as commit 5593473a1e6c, because KVM_RUN can only be invoked 
> after the "point of no return" of create_vcpu_fd().
> 
> The patch is good anyway because it's cleaner to use the same order as 
> in kvm_vcpu_destroy(), but perhaps you can also move 
> kvm_dirty_ring_alloc() before kvm_arch_vcpu_create(), so that the order 
> remains opposite for creation and destruction?

As you pointed out, I'm also unsure whether this can actually be 
triggered.

I agree that a fix following the approach of commit 5593473a1e6c will 
work. However, I wonder whether swapping the order of 
kvm_dirty_ring_alloc() and kvm_arch_vcpu_create() might introduce other 
issues?

> Thanks,
> 
> Paolo

--
Thanks, 

Chen Yufeng

Patch

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e85b33a92624..089efc4d01c6 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4178,9 +4178,9 @@  static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id)
 	xa_erase(&kvm->vcpu_array, vcpu->vcpu_idx);
 unlock_vcpu_destroy:
 	mutex_unlock(&kvm->lock);
-	kvm_dirty_ring_free(&vcpu->dirty_ring);
 arch_vcpu_destroy:
 	kvm_arch_vcpu_destroy(vcpu);
+	kvm_dirty_ring_free(&vcpu->dirty_ring);
 vcpu_free_run_page:
 	free_page((unsigned long)vcpu->run);
 vcpu_free: