author		Sean Christopherson <sean.j.christopherson@intel.com>	2019-02-05 13:01:24 -0800
committer	Paolo Bonzini <pbonzini@redhat.com>	2019-02-20 22:48:40 +0100
commit		571c5af06e303b4a69016193fd6b5afbc96eac40
tree		addd30f7a8b18519c9444a86ece7516c4affdccc /arch/x86/kvm/mmu.c
parent		4771450c345dc5e3e3417d82aff62e0d88e7eee6
KVM: x86/mmu: Voluntarily reschedule as needed when zapping MMIO sptes
Call cond_resched_lock() when zapping MMIO sptes to reschedule if needed,
or to release and reacquire mmu_lock in case of contention. There is no
need to flush or zap when temporarily dropping mmu_lock, as the zapping of
MMIO sptes is done while holding the memslots lock and with the "update
in-progress" bit set in the memslots generation, which disables MMIO spte
caching. The walk does need to be restarted if mmu_lock is dropped, as the
active pages list may be modified while the lock is not held.
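
For reference, cond_resched_lock() behaves roughly as sketched below: it
drops and reacquires the lock only when a reschedule is due or the lock is
contended, and it returns nonzero exactly when the lock was dropped, which
is what makes it usable as the restart condition in this patch. This is an
approximation of the helper's behavior, not the actual implementation in
kernel/sched/core.c:

#include <linux/sched.h>
#include <linux/spinlock.h>

/*
 * Approximate behavior of cond_resched_lock() (sketch only; the real
 * helper differs in detail, e.g. in how it interacts with preemption).
 */
static inline int cond_resched_lock_sketch(spinlock_t *lock)
{
	if (need_resched() || spin_needbreak(lock)) {
		spin_unlock(lock);
		cond_resched();		/* may schedule away */
		spin_lock(lock);
		return 1;	/* lock was dropped; caller must revalidate */
	}
	return 0;		/* lock was held throughout; no restart needed */
}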
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Diffstat (limited to 'arch/x86/kvm/mmu.c')
 -rw-r--r--	arch/x86/kvm/mmu.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d80c1558b23c..2190679eda39 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -5954,7 +5954,8 @@ restart:
 	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
 		if (!sp->mmio_cached)
 			continue;
-		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
+		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list) ||
+		    cond_resched_lock(&kvm->mmu_lock))
 			goto restart;
 	}
 
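
For context, the surrounding function, kvm_mmu_zap_mmio_sptes() in
arch/x86/kvm/mmu.c, reads approximately as follows after this patch. The
lines outside the hunk are reconstructed from the hunk context and the
commit message rather than quoted from the tree, so treat them as an
assumption:

static void kvm_mmu_zap_mmio_sptes(struct kvm *kvm)
{
	struct kvm_mmu_page *sp, *node;
	LIST_HEAD(invalid_list);	/* assumed: standard zap accumulator */

	spin_lock(&kvm->mmu_lock);
restart:
	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
		/* Only pages that may hold cached MMIO sptes need zapping. */
		if (!sp->mmio_cached)
			continue;
		/*
		 * Restart the walk if the zap modified the list, or if
		 * cond_resched_lock() dropped mmu_lock; in either case
		 * entries other than sp may have been removed behind the
		 * _safe iterator.
		 */
		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list) ||
		    cond_resched_lock(&kvm->mmu_lock))
			goto restart;
	}

	kvm_mmu_commit_zap_page(kvm, &invalid_list);	/* assumed */
	spin_unlock(&kvm->mmu_lock);
}

Per the commit message, the pending invalid_list does not need to be
committed or flushed before dropping the lock: with the "update
in-progress" generation bit set, no new MMIO sptes can be cached while the
zap is in flight.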