
[3/3] KVM: Ensure lockdep knows about kvm->lock vs. vcpu->mutex ordering rule

Message ID 83b174d505d47967f3c8762d231b69b6fc48d80a.camel@infradead.org
State New, archived
Series KVM: x86/xen: Fix lockdep warning on "recursive" gpc locking

Commit Message

David Woodhouse Jan. 11, 2023, 9:37 a.m. UTC
From: David Woodhouse <dwmw@amazon.co.uk>

Documentation/virt/kvm/locking.rst tells us that kvm->lock is taken outside
vcpu->mutex. But that doesn't actually happen very often; it's only in
some esoteric cases like migration with AMD SEV. This means that lockdep
usually doesn't notice, and doesn't do its job of keeping us honest.
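For illustration only (not part of this patch), the documented ordering
looks roughly like this; example_path() is a hypothetical function and
error handling is omitted:

        /* Hypothetical sketch of the documented lock ordering. */
        static void example_path(struct kvm *kvm, struct kvm_vcpu *vcpu)
        {
                mutex_lock(&kvm->lock);         /* outer lock, per locking.rst */
                mutex_lock(&vcpu->mutex);       /* inner lock, taken inside kvm->lock */
                /* ... */
                mutex_unlock(&vcpu->mutex);
                mutex_unlock(&kvm->lock);
        }

Lockdep only learns an ordering when it actually observes both locks
held together, so a rule exercised only on rare paths like SEV
migration goes unchecked on most configurations.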

Ensure that lockdep *always* knows about the ordering of these two locks,
by briefly taking vcpu->mutex in kvm_vm_ioctl_create_vcpu() while kvm->lock
is held.
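With that annotation in place, an inverted acquisition anywhere in the
tree should now trigger a "possible circular locking dependency" splat
on any lockdep-enabled kernel. A hypothetical offender would look like:

        /* Hypothetical buggy code; buggy_path() is not a real function. */
        static void buggy_path(struct kvm_vcpu *vcpu)
        {
                mutex_lock(&vcpu->mutex);
                mutex_lock(&vcpu->kvm->lock);   /* AB-BA inversion: lockdep complains */
                /* ... */
                mutex_unlock(&vcpu->kvm->lock);
                mutex_unlock(&vcpu->mutex);
        }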

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 virt/kvm/kvm_main.c | 7 +++++++
 1 file changed, 7 insertions(+)

Patch

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 07bf29450521..5814037148bd 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3924,6 +3924,13 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id)
        }
 
        mutex_lock(&kvm->lock);
+
+#ifdef CONFIG_LOCKDEP
+       /* Ensure that lockdep knows vcpu->mutex is taken *inside* kvm->lock */
+       mutex_lock(&vcpu->mutex);
+       mutex_unlock(&vcpu->mutex);
+#endif
+
        if (kvm_get_vcpu_by_id(kvm, id)) {
                r = -EEXIST;
                goto unlock_vcpu_destroy;