author | David Woodhouse <dwmw@amazon.co.uk> | 2023-01-11 18:06:50 +0000
committer | Paolo Bonzini <pbonzini@redhat.com> | 2023-01-11 13:32:21 -0500
commit | 42a90008f890afc41837dfeec1f0b1e7bcecf94a
tree | 52d9bef1911c441968650403c0a4519513e9781e
parent | bbe17c625d6843e9cdf14d81fbece1b0f0c3fb2f
KVM: Ensure lockdep knows about kvm->lock vs. vcpu->mutex ordering rule
Documentation/virt/kvm/locking.rst tells us that kvm->lock is taken outside
vcpu->mutex. But that doesn't actually happen very often; it's only in
some esoteric cases like migration with AMD SEV. This means that lockdep
usually doesn't notice, and doesn't do its job of keeping us honest.

Ensure that lockdep *always* knows about the ordering of these two locks,
by briefly taking vcpu->mutex in kvm_vm_ioctl_create_vcpu() while kvm->lock
is held.
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Message-Id: <20230111180651.14394-3-dwmw2@infradead.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-rw-r--r-- | virt/kvm/kvm_main.c | 7 |
1 file changed, 7 insertions, 0 deletions
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 13e88297f999..9c60384b5ae0 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3954,6 +3954,13 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id)
 	}
 
 	mutex_lock(&kvm->lock);
+
+#ifdef CONFIG_LOCKDEP
+	/* Ensure that lockdep knows vcpu->mutex is taken *inside* kvm->lock */
+	mutex_lock(&vcpu->mutex);
+	mutex_unlock(&vcpu->mutex);
+#endif
+
 	if (kvm_get_vcpu_by_id(kvm, id)) {
 		r = -EEXIST;
 		goto unlock_vcpu_destroy;
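
For illustration only, and not part of the patch: a minimal sketch of the same lockdep-priming idea applied to two generic mutexes. The names outer_lock, inner_lock, prime_lock_order() and inverted_path() are hypothetical; only DEFINE_MUTEX(), mutex_lock()/mutex_unlock() and CONFIG_LOCKDEP are real kernel interfaces. Taking and immediately dropping the inner lock while the outer lock is held records the outer -> inner dependency in lockdep's graph, so any later path that nests the locks the other way round triggers a warning even if the "legitimate" nested path is never exercised at runtime.

#include <linux/mutex.h>

static DEFINE_MUTEX(outer_lock);	/* plays the role of kvm->lock */
static DEFINE_MUTEX(inner_lock);	/* plays the role of vcpu->mutex */

/* Common path, e.g. object creation: seed the ordering rule. */
static void prime_lock_order(void)
{
	mutex_lock(&outer_lock);
#ifdef CONFIG_LOCKDEP
	/* Teach lockdep that inner_lock nests inside outer_lock. */
	mutex_lock(&inner_lock);
	mutex_unlock(&inner_lock);
#endif
	mutex_unlock(&outer_lock);
}

/* Rare (or buggy) path with the opposite nesting. */
static void inverted_path(void)
{
	mutex_lock(&inner_lock);
	mutex_lock(&outer_lock);	/* lockdep reports the AB-BA inversion here */
	mutex_unlock(&outer_lock);
	mutex_unlock(&inner_lock);
}

This is why the patch takes vcpu->mutex only briefly in kvm_vm_ioctl_create_vcpu(): vCPU creation already runs under kvm->lock on every VM, so it is enough to seed the dependency that the rare SEV migration path relies on.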