[v2,3/4] KVM: Ensure lockdep knows about kvm->lock vs. vcpu->mutex ordering rule

Message ID 20230111180651.14394-3-dwmw2@infradead.org (mailing list archive)
State New, archived
Series [v2,1/4] KVM: x86/xen: Fix lockdep warning on "recursive" gpc locking

Commit Message

David Woodhouse Jan. 11, 2023, 6:06 p.m. UTC
From: David Woodhouse <dwmw@amazon.co.uk>

Documentation/virt/kvm/locking.rst tells us that kvm->lock is taken outside
vcpu->mutex. But that doesn't actually happen very often; it's only in
some esoteric cases like migration with AMD SEV. This means that lockdep
usually doesn't notice, and doesn't do its job of keeping us honest.

Ensure that lockdep *always* knows about the ordering of these two locks,
by briefly taking vcpu->mutex in kvm_vm_ioctl_create_vcpu() while kvm->lock
is held.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 virt/kvm/kvm_main.c | 7 +++++++
 1 file changed, 7 insertions(+)

Comments

Paul Durrant Jan. 12, 2023, 4:27 p.m. UTC | #1
On 11/01/2023 18:06, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
> 
> Documentation/virt/kvm/locking.rst tells us that kvm->lock is taken outside
> vcpu->mutex. But that doesn't actually happen very often; it's only in
> some esoteric cases like migration with AMD SEV. This means that lockdep
> usually doesn't notice, and doesn't do its job of keeping us honest.
> 
> Ensure that lockdep *always* knows about the ordering of these two locks,
> by briefly taking vcpu->mutex in kvm_vm_ioctl_create_vcpu() while kvm->lock
> is held.
> 
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> ---
>   virt/kvm/kvm_main.c | 7 +++++++
>   1 file changed, 7 insertions(+)

Reviewed-by: Paul Durrant <paul@xen.org>
Patch

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 07bf29450521..5814037148bd 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3924,6 +3924,13 @@  static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id)
 	}
 
 	mutex_lock(&kvm->lock);
+
+#ifdef CONFIG_LOCKDEP
+	/* Ensure that lockdep knows vcpu->mutex is taken *inside* kvm->lock */
+	mutex_lock(&vcpu->mutex);
+	mutex_unlock(&vcpu->mutex);
+#endif
+
 	if (kvm_get_vcpu_by_id(kvm, id)) {
 		r = -EEXIST;
 		goto unlock_vcpu_destroy;