author | Andre Przywara <andre.przywara@arm.com> | 2018-05-11 15:20:14 +0100
committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2018-05-22 18:53:57 +0200
commit | 27ea98a4c50c7aff5cfa132c15d6c68764b493fa
tree | ad508b6aa88635d55643cefb80c20418c3faa594 /arch/arm/include
parent | b6f6d8bfe779294e58f424bff08eab9914570d9b
KVM: arm/arm64: VGIC/ITS: protect kvm_read_guest() calls with SRCU lock
commit bf308242ab98b5d1648c3663e753556bef9bec01 upstream.
kvm_read_guest() will eventually look up in kvm_memslots(), which requires
either holding the kvm->slots_lock or being inside a kvm->srcu read-side
critical section.
In contrast to x86 and s390, we don't take the SRCU lock on every guest
exit, so we have to take it individually around each kvm_read_guest() call.
Provide a wrapper which does that and use that everywhere.
Note that ending the SRCU critical section before returning from the
kvm_read_guest() wrapper is safe, because the data has been *copied*, so
we don't need to rely on valid references to the memslot anymore.
Cc: Stable <stable@vger.kernel.org> # 4.8+
Reported-by: Jan Glauber <jan.glauber@caviumnetworks.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'arch/arm/include')
-rw-r--r-- | arch/arm/include/asm/kvm_mmu.h | 16
1 file changed, 16 insertions(+), 0 deletions(-)
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index eb46fc81a440..08cd720eae01 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -221,6 +221,22 @@ static inline unsigned int kvm_get_vmid_bits(void)
 	return 8;
 }
 
+/*
+ * We are not in the kvm->srcu critical section most of the time, so we take
+ * the SRCU read lock here. Since we copy the data from the user page, we
+ * can immediately drop the lock again.
+ */
+static inline int kvm_read_guest_lock(struct kvm *kvm,
+				      gpa_t gpa, void *data, unsigned long len)
+{
+	int srcu_idx = srcu_read_lock(&kvm->srcu);
+	int ret = kvm_read_guest(kvm, gpa, data, len);
+
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+
+	return ret;
+}
+
 static inline void *kvm_get_hyp_vector(void)
 {
 	return kvm_ksym_ref(__kvm_hyp_vector);
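
For illustration, a minimal sketch of how a call site would use the new wrapper. The function read_its_table_entry() and struct its_table_entry are hypothetical and not taken from the actual VGIC/ITS code; only kvm_read_guest(), kvm_read_guest_lock(), srcu_read_lock() and srcu_read_unlock() are real kernel interfaces.

#include <linux/kvm_host.h>	/* struct kvm, gpa_t, kvm_read_guest() */
#include <asm/kvm_mmu.h>	/* kvm_read_guest_lock() added by this patch */

/* Hypothetical guest-memory-resident table entry, for illustration only. */
struct its_table_entry {
	u64 val;
};

/*
 * Hypothetical call site: copy one entry of a guest table at entry_gpa.
 *
 * Calling kvm_read_guest() directly here would reach kvm_memslots()
 * without holding kvm->slots_lock and without being in a kvm->srcu
 * read-side critical section. The wrapper takes the SRCU read lock,
 * performs the copy, and drops the lock again; since the data has been
 * copied into *entry, no reference to the memslot outlives the lock.
 */
static int read_its_table_entry(struct kvm *kvm, gpa_t entry_gpa,
				struct its_table_entry *entry)
{
	return kvm_read_guest_lock(kvm, entry_gpa, entry, sizeof(*entry));
}

A caller that needed to keep a reference to the memslot after the read would instead have to hold the SRCU read lock across its whole critical region itself; the wrapper is only safe because the data is copied out before the lock is dropped.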