author    Marc Zyngier <maz@kernel.org>    2020-05-25 09:45:21 +0100
committer Marc Zyngier <maz@kernel.org>    2020-07-05 17:26:15 +0100
commit    a47dee5513cd7b6d1e20dfecd458363f24a19cdc (patch)
tree      d5c0a98c71e7453a859d8d5b9ee20302db5707d5 /arch/arm64/kvm/vgic/vgic-its.c
parent    b3a9e3b9622ae10064826dccb4f7a52bd88c7407 (diff)
KVM: arm64: Allow in-atomic injection of SPIs
On a system that uses SPIs to implement MSIs (as would be the case on a GICv2 system exposing a GICv2m to its guests), we deny the possibility of injecting SPIs on the in-atomic fast path. This results in a very large number of context switches on the host (roughly twice the interrupt rate), and suboptimal performance for the guest (as measured with a test workload involving a virtio interface backed by vhost-net). Given that GICv2 systems are usually at the low end of the spectrum performance-wise, they could do without the aggravation.

We solved this for GICv3+ITS by adding a translation cache. But SPIs do not need any extra infrastructure: they can be injected immediately into the virtual distributor, as the locking there is already heavy enough that we don't need to worry about anything else.

This halves the number of context switches for the same workload.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
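For context, the in-atomic injection described above is driven from the irqfd fast path. The sketch below is illustrative only and is not part of the diff shown here (which is limited to vgic-its.c): it shows one possible shape for kvm_arch_set_irq_inatomic() in arch/arm64/kvm/vgic/vgic-irqfd.c, assuming the existing helpers kvm_populate_msi(), vgic_irqfd_set_irq() and vgic_initialized() from the vgic code.

/*
 * Illustrative sketch: in-atomic irqfd injection. MSIs are looked up
 * in the ITS translation cache; SPIs (IRQCHIP routing) are injected
 * straight into the virtual distributor. Returning -EWOULDBLOCK makes
 * the caller fall back to the non-atomic (sleeping) path.
 */
int kvm_arch_set_irq_inatomic(struct kvm_kernel_irq_routing_entry *e,
			      struct kvm *kvm, int irq_source_id,
			      int level, bool line_status)
{
	if (!level)
		return -EWOULDBLOCK;

	switch (e->type) {
	case KVM_IRQ_ROUTING_MSI: {
		struct kvm_msi msi;

		kvm_populate_msi(e, &msi);
		/* Either hits the ITS translation cache or -EWOULDBLOCK. */
		return vgic_its_inject_cached_translation(kvm, &msi);
	}

	case KVM_IRQ_ROUTING_IRQCHIP:
		/*
		 * SPIs can be injected directly into the distributor;
		 * its locking already covers the in-atomic case.
		 */
		if (unlikely(!vgic_initialized(kvm)))
			break;
		return vgic_irqfd_set_irq(e, kvm, irq_source_id, 1,
					  line_status);
	}

	return -EWOULDBLOCK;
}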
Diffstat (limited to 'arch/arm64/kvm/vgic/vgic-its.c')
-rw-r--r--  arch/arm64/kvm/vgic/vgic-its.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
index c012a52b19f5..40cbaca81333 100644
--- a/arch/arm64/kvm/vgic/vgic-its.c
+++ b/arch/arm64/kvm/vgic/vgic-its.c
@@ -757,9 +757,8 @@ int vgic_its_inject_cached_translation(struct kvm *kvm, struct kvm_msi *msi)
 
 	db = (u64)msi->address_hi << 32 | msi->address_lo;
 	irq = vgic_its_check_cache(kvm, db, msi->devid, msi->data);
-
 	if (!irq)
-		return -1;
+		return -EWOULDBLOCK;
 
 	raw_spin_lock_irqsave(&irq->irq_lock, flags);
 	irq->pending_latch = true;