author | Wanpeng Li <wanpengli@tencent.com> | 2020-04-28 14:23:27 +0800
---|---|---
committer | Paolo Bonzini <pbonzini@redhat.com> | 2020-05-15 12:26:20 -0400
commit | 379a3c8ee44440d5afa505230ed8cb5b0d0e314b (patch) |
tree | 85d3010294ab19df707ea05be9b9863c40e14f02 |
parent | 404d5d7bff0d419fe11c7eaebca9ec8f25258f95 (diff) |
KVM: VMX: Optimize posted-interrupt delivery for timer fastpath
While optimizing posted-interrupt delivery, especially for the timer
fastpath scenario, I measured that kvm_x86_ops.deliver_posted_interrupt()
introduces substantial latency: the target processor has to perform all
vmentry tasks, ack the posted-interrupt notification vector, read the
posted-interrupt descriptor, and so on.
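For context, here is a minimal userspace model of that delivery path: a
simplified stand-in for struct pi_desc, with the notification IPI replaced
by a printf. It is a sketch of the PIR/ON handshake only, not the kernel's
actual code or layout.

```c
/*
 * Minimal userspace model of posted-interrupt delivery, NOT the kernel's
 * implementation: struct pi_desc here is a simplified stand-in, and the
 * notification IPI is replaced by a printf. It illustrates why a second
 * delivery while ON is already set needs no further notification.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct pi_desc {
	_Atomic unsigned long pir[4];   /* 256-bit posted-interrupt requests */
	_Atomic unsigned long control;  /* bit 0: ON (outstanding notification) */
};

/* Set the vector's bit in PIR; return true if it was already pending. */
static bool pi_test_and_set_pir(struct pi_desc *pi, int vector)
{
	unsigned long mask = 1UL << (vector % 64);

	return atomic_fetch_or(&pi->pir[vector / 64], mask) & mask;
}

/* Set ON; return true if a notification was already outstanding. */
static bool pi_test_and_set_on(struct pi_desc *pi)
{
	return atomic_fetch_or(&pi->control, 1UL) & 1UL;
}

static void deliver_posted_interrupt(struct pi_desc *pi, int vector)
{
	pi_test_and_set_pir(pi, vector);
	if (pi_test_and_set_on(pi))
		return;                 /* notification already in flight */
	/* The expensive step the commit measures: the notification IPI. */
	printf("send notification IPI for vector %d\n", vector);
}

int main(void)
{
	struct pi_desc pi = { .pir = { 0 }, .control = 0 };

	deliver_posted_interrupt(&pi, 236); /* IPI sent */
	deliver_posted_interrupt(&pi, 236); /* ON already set: no IPI */
	return 0;
}
```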
This is not only slow, it is also unnecessary when delivering an
interrupt to the current CPU (as is the case for the LAPIC timer) because
PIR->IRR and IRR->RVI synchronization is already performed on vmentry.

Therefore, skip kvm_vcpu_trigger_posted_interrupt() in this case, and
instead do vmx_sync_pir_to_irr() on the EXIT_FASTPATH_REENTER_GUEST
fastpath as well.
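A toy sketch of the resulting control flow follows, assuming invented
stand-ins for the vCPU type, the per-CPU running-vCPU pointer, and the
IPI/sync primitives; only the kvm_get_running_vcpu() and
vmx_sync_pir_to_irr() names come from the patch itself.

```c
/*
 * Toy model of the decision this patch adds, not kernel code: every type
 * and helper besides the two names taken from the patch is invented here
 * purely for illustration.
 */
#include <stdio.h>

struct kvm_vcpu { int id; };

/* Models the per-CPU "currently running vCPU" pointer kvm keeps. */
static struct kvm_vcpu *running_vcpu;

static struct kvm_vcpu *kvm_get_running_vcpu(void)
{
	return running_vcpu;
}

/* Stand-ins for the notification IPI and the PIR->IRR synchronization. */
static void send_notification_ipi(struct kvm_vcpu *v)
{
	printf("notification IPI -> vcpu %d\n", v->id);
}

static void vmx_sync_pir_to_irr(struct kvm_vcpu *v)
{
	printf("PIR->IRR sync for vcpu %d\n", v->id);
}

static void deliver_posted_interrupt(struct kvm_vcpu *target)
{
	/*
	 * Self-delivery (e.g. the LAPIC timer firing on the vCPU's own
	 * CPU): skip the IPI, the pending bits are picked up on vmentry.
	 */
	if (target != kvm_get_running_vcpu())
		send_notification_ipi(target);
}

/* What the EXIT_FASTPATH_REENTER_GUEST path now does before re-entry. */
static void reenter_guest_fastpath(struct kvm_vcpu *v)
{
	vmx_sync_pir_to_irr(v);
}

int main(void)
{
	struct kvm_vcpu self = { .id = 0 };

	running_vcpu = &self;
	deliver_posted_interrupt(&self); /* no IPI printed */
	reenter_guest_fastpath(&self);   /* sync happens here instead */
	return 0;
}
```

The real change, shown in the diff below, additionally guards the sync
with vcpu->arch.apicv_active.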
Tested-by: Haiwei Li <lihaiwei@tencent.com>
Cc: Haiwei Li <lihaiwei@tencent.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Message-Id: <1588055009-12677-6-git-send-email-wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-rw-r--r-- | arch/x86/kvm/vmx/vmx.c | 5 |
-rw-r--r-- | virt/kvm/kvm_main.c | 1 |
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c7730d3aa706..8d881fcf648e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3936,7 +3936,8 @@ static int vmx_deliver_posted_interrupt(struct kvm_vcpu *vcpu, int vector)
 	if (pi_test_and_set_on(&vmx->pi_desc))
 		return 0;
 
-	if (!kvm_vcpu_trigger_posted_interrupt(vcpu, false))
+	if (vcpu != kvm_get_running_vcpu() &&
+	    !kvm_vcpu_trigger_posted_interrupt(vcpu, false))
 		kvm_vcpu_kick(vcpu);
 
 	return 0;
@@ -6812,6 +6813,8 @@ reenter_guest:
		 * but it would incur the cost of a retpoline for now.
		 * Revisit once static calls are available.
		 */
+		if (vcpu->arch.apicv_active)
+			vmx_sync_pir_to_irr(vcpu);
 		goto reenter_guest;
 	}
 	exit_fastpath = EXIT_FASTPATH_EXIT_HANDLED;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index bef3d8d40685..11844fad60fd 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4646,6 +4646,7 @@ struct kvm_vcpu *kvm_get_running_vcpu(void)
 
 	return vcpu;
 }
+EXPORT_SYMBOL_GPL(kvm_get_running_vcpu);
 
 /**
  * kvm_get_running_vcpus - get the per-CPU array of currently running vcpus.