author		Sean Christopherson <sean.j.christopherson@intel.com>	2019-02-05 13:01:33 -0800
committer	Paolo Bonzini <pbonzini@redhat.com>			2019-02-20 22:48:46 +0100
commit		5d6317ca4e61a3fa7528f832cd945c42fde8e67f
tree		6bbcdb18a12decd98ce8aa9f38d0d9f4575ad2b0 /arch/x86
parent		8a674adc11cd4cc59e51eaea6f0cc4b3d5710411
KVM: x86/mmu: Voluntarily reschedule as needed when zapping all sptes
Call cond_resched_lock() when zapping all sptes to reschedule if needed
or to release and reacquire mmu_lock in case of contention. There is no
need to flush or zap when temporarily dropping mmu_lock as zapping all
sptes is done only when the owning userspace VMM has exited or when the
VM is being destroyed, i.e. there is no interplay with memslots or MMIO
generations to worry about.
Be paranoid and restart the walk if mmu_lock is dropped to avoid any
potential issues with consuming a stale iterator. The overhead in doing
so is negligible as at worst there will be a few root shadow pages at
the head of the list, i.e. the iterator is essentially the head of the
list already.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Diffstat (limited to 'arch/x86')
-rw-r--r--	arch/x86/kvm/mmu.c	3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c79ad7f31fdb..fa153d771f47 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -5856,7 +5856,8 @@ restart:
 	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
 		if (sp->role.invalid && sp->root_count)
 			continue;
-		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
+		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list) ||
+		    cond_resched_lock(&kvm->mmu_lock))
 			goto restart;
 	}
 
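For illustration only, the sketch below restates the pattern the patch relies on as a standalone userspace C program: walk a list under a lock, periodically drop and reacquire the lock, and restart the walk from the head whenever the list was modified or the lock was dropped, since the saved iterator may be stale in either case. Every name in it is a hypothetical stand-in, not KVM code: struct page, zap_one(), maybe_resched() and the pthread mutex play the roles of kvm_mmu_page, kvm_mmu_prepare_zap_page(), cond_resched_lock() and kvm->mmu_lock.

/*
 * zap_all_sketch.c - userspace analogue of the restart-on-lock-drop
 * pattern used by the patch above.  Illustrative only; all names are
 * hypothetical stand-ins, not kernel interfaces.
 *
 * Build: cc -pthread -o zap_all_sketch zap_all_sketch.c
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct page {
	int pinned;			/* rough analogue of sp->root_count */
	struct page *next;
};

static struct page *head;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Stand-in for cond_resched_lock(): occasionally drop and reacquire the
 * lock, returning nonzero only when the lock was actually released and
 * the caller's cached iterator can therefore no longer be trusted.
 */
static int maybe_resched(pthread_mutex_t *l)
{
	static int calls;

	if (++calls % 16)		/* pretend rescheduling is rarely needed */
		return 0;
	pthread_mutex_unlock(l);
	/* other threads could add or remove pages while the lock is free */
	pthread_mutex_lock(l);
	return 1;
}

/*
 * Stand-in for kvm_mmu_prepare_zap_page(): unlink and free one page,
 * returning nonzero when the list was modified.  Pinned pages cannot be
 * zapped yet, so the caller simply walks past them.
 */
static int zap_one(struct page *victim)
{
	struct page **pp;

	if (victim->pinned)
		return 0;
	for (pp = &head; *pp; pp = &(*pp)->next) {
		if (*pp == victim) {
			*pp = victim->next;
			free(victim);
			return 1;
		}
	}
	return 0;
}

static void zap_all(void)
{
	struct page *p, *next;

	pthread_mutex_lock(&lock);
restart:
	for (p = head; p; p = next) {
		next = p->next;
		/*
		 * Restart from the head whenever the list changed or the
		 * lock was dropped: either way 'next' may point at a page
		 * that no longer exists.
		 */
		if (zap_one(p) || maybe_resched(&lock))
			goto restart;
	}
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	struct page *p;
	int i, left = 0;

	for (i = 0; i < 100; i++) {
		p = calloc(1, sizeof(*p));
		p->pinned = (i % 40 == 0);	/* leave a few pages unzappable */
		p->next = head;
		head = p;
	}
	zap_all();
	for (p = head; p; p = p->next)
		left++;
	printf("pages left after zap_all(): %d\n", left);
	return 0;
}

As in the patch, restarting is cheap because pages that were already zapped have been removed from the list, so each restart only walks past the few entries that could not be freed before reaching new work.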