author     Ben Gardon <bgardon@google.com>       2019-03-12 11:45:58 -0700
committer  Paolo Bonzini <pbonzini@redhat.com>   2019-03-15 19:16:45 +0100
commit     92da008fa21034c369cdb8ca2b629fe5c196826b
tree       780a56475b588838e148a1c198b6a67b379abd46 /arch
parent     71783e09b4874c845819b5658b968d8b5b899333
Revert "KVM/MMU: Flush tlb directly in the kvm_zap_gfn_range()"
This reverts commit 71883a62fcd6c70639fa12cda733378b4d997409.
The above commit contains an optimization to kvm_zap_gfn_range which
uses gfn-limited TLB flushes when they are available. When using these
limited flushes, kvm_zap_gfn_range passes lock_flush_tlb=false to
slot_handle_level_range, which creates a race when the function drops
mmu_lock to call cond_resched. See an example of this race below:
CPU 0                   CPU 1                       CPU 3
// zap_direct_gfn_range
mmu_lock()
// *ptep == pte_1
*ptep = 0
if (lock_flush_tlb)
        flush_tlbs()
mmu_unlock()
                        // In invalidate range
                        // MMU notifier
                        mmu_lock()
                        if (pte != 0)
                                *ptep = 0
                        flush = true
                        if (flush)
                                flush_remote_tlbs()
                        mmu_unlock()
                        return
                        // Host MM reallocates
                        // page previously
                        // backing guest memory.
                                                    // Guest accesses
                                                    // invalid page
                                                    // through pte_1
                                                    // in its TLB!!
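
The window exists because slot_handle_level_range only performs the
pending flush before yielding mmu_lock when its lock_flush_tlb argument
is true. The following is a condensed sketch of that helper, paraphrased
from the arch/x86/kvm/mmu.c of this era rather than quoted verbatim
(iterator details trimmed):

static bool slot_handle_level_range(struct kvm *kvm,
                struct kvm_memory_slot *memslot, slot_level_handler fn,
                int start_level, int end_level, gfn_t start_gfn,
                gfn_t end_gfn, bool lock_flush_tlb)
{
        struct slot_rmap_walk_iterator iterator;
        bool flush = false;

        for_each_slot_rmap_range(memslot, start_level, end_level,
                        start_gfn, end_gfn, &iterator) {
                if (iterator.rmap)
                        flush |= fn(kvm, iterator.rmap);

                if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
                        /* Flush before yielding only if asked to. */
                        if (flush && lock_flush_tlb) {
                                kvm_flush_remote_tlbs(kvm);
                                flush = false;
                        }
                        /*
                         * With lock_flush_tlb == false, mmu_lock is
                         * dropped here while zapped PTEs may still be
                         * live in remote TLBs -- the window shown in
                         * the diagram above.
                         */
                        cond_resched_lock(&kvm->mmu_lock);
                }
        }

        return flush;
}

Passing lock_flush_tlb=false defers the flush to the caller, which is
only safe if the lock is never dropped before that flush happens.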
Tested: Ran all kvm-unit-tests on an Intel Haswell machine with and
without this patch. The patch introduced no new failures.
Signed-off-by: Ben Gardon <bgardon@google.com>
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Diffstat (limited to 'arch')
 arch/x86/kvm/mmu.c | 16 +++-------------
 1 file changed, 3 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 8d43b7c0f56f..4cda5ee48845 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -5660,13 +5660,8 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 {
        struct kvm_memslots *slots;
        struct kvm_memory_slot *memslot;
-       bool flush_tlb = true;
-       bool flush = false;
        int i;

-       if (kvm_available_flush_tlb_with_range())
-               flush_tlb = false;
-
        spin_lock(&kvm->mmu_lock);
        for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
                slots = __kvm_memslots(kvm, i);
@@ -5678,17 +5673,12 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
                        if (start >= end)
                                continue;

-                       flush |= slot_handle_level_range(kvm, memslot,
-                                       kvm_zap_rmapp, PT_PAGE_TABLE_LEVEL,
-                                       PT_MAX_HUGEPAGE_LEVEL, start,
-                                       end - 1, flush_tlb);
+                       slot_handle_level_range(kvm, memslot, kvm_zap_rmapp,
+                                       PT_PAGE_TABLE_LEVEL, PT_MAX_HUGEPAGE_LEVEL,
+                                       start, end - 1, true);
                }
        }

-       if (flush)
-               kvm_flush_remote_tlbs_with_address(kvm, gfn_start,
-                               gfn_end - gfn_start + 1);
-
        spin_unlock(&kvm->mmu_lock);
 }
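
With the revert applied, kvm_zap_gfn_range always passes
lock_flush_tlb=true, so slot_handle_level_range performs any pending
flush before cond_resched_lock can drop mmu_lock, and the deferred
range-based flush after the loop is no longer needed.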