author     Zenghui Yu <yuzenghui@huawei.com>        2019-10-29 15:19:19 +0800
committer  Marc Zyngier <maz@kernel.org>            2019-10-29 13:47:39 +0000
commit     ca185b260951d3b55108c0b95e188682d8a507b7
tree       478a1020713a0cf0bb46568973f6dea49c86f539 /virt
parent     bad36e4e8cdc9048948490293efefdbd85c40ecc
KVM: arm/arm64: vgic: Don't rely on the wrong pending table
It's possible for two LPIs to reside in the same "byte_offset" but target
two different vcpus, so that their pending status is indicated by two
different pending tables. In such a scenario, the last_byte_offset
optimization leads KVM to rely on the wrong pending table entry. Use
last_ptr instead, which can be treated as a byte index into a pending
table and is also vcpu specific.
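
A minimal standalone C sketch of the aliasing described above, assuming
made-up INTIDs and per-vcpu pending table base addresses (in KVM the bases
come from each vcpu's GICR_PENDBASER):

#include <stdint.h>
#include <stdio.h>

#define BITS_PER_BYTE	8

int main(void)
{
	/* Two LPI INTIDs that fall into the same byte of a pending table. */
	uint32_t intid_a = 8192;	/* assumed to target vcpu0 */
	uint32_t intid_b = 8193;	/* assumed to target vcpu1 */

	/* Hypothetical per-vcpu pending table bases (guest physical addresses). */
	uint64_t pendbase_vcpu0 = 0x80000000ULL;
	uint64_t pendbase_vcpu1 = 0x80100000ULL;

	uint32_t byte_a = intid_a / BITS_PER_BYTE;
	uint32_t byte_b = intid_b / BITS_PER_BYTE;
	uint64_t ptr_a = pendbase_vcpu0 + byte_a;
	uint64_t ptr_b = pendbase_vcpu1 + byte_b;

	/*
	 * The byte offsets collide, so caching last_byte_offset would skip
	 * the read of vcpu1's table and reuse the byte read from vcpu0's
	 * table; the full pointers do not collide, so caching last_ptr
	 * forces a fresh read per pending table.
	 */
	printf("byte_offset: %u vs %u (equal: %d)\n",
	       byte_a, byte_b, byte_a == byte_b);
	printf("ptr: 0x%llx vs 0x%llx (equal: %d)\n",
	       (unsigned long long)ptr_a, (unsigned long long)ptr_b,
	       ptr_a == ptr_b);
	return 0;
}

In the patch itself, last_ptr is initialised to ~(gpa_t)0 so that the first
LPI in the list always triggers a guest read.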
Fixes: 280771252c1b ("KVM: arm64: vgic-v3: KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES")
Cc: stable@vger.kernel.org
Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/20191029071919.177-4-yuzenghui@huawei.com
Diffstat (limited to 'virt')
-rw-r--r--   virt/kvm/arm/vgic/vgic-v3.c | 6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/virt/kvm/arm/vgic/vgic-v3.c b/virt/kvm/arm/vgic/vgic-v3.c
index e69c538a24ca..f45635a6f0ec 100644
--- a/virt/kvm/arm/vgic/vgic-v3.c
+++ b/virt/kvm/arm/vgic/vgic-v3.c
@@ -363,8 +363,8 @@ retry:
 int vgic_v3_save_pending_tables(struct kvm *kvm)
 {
 	struct vgic_dist *dist = &kvm->arch.vgic;
-	int last_byte_offset = -1;
 	struct vgic_irq *irq;
+	gpa_t last_ptr = ~(gpa_t)0;
 	int ret;
 	u8 val;
 
@@ -384,11 +384,11 @@ int vgic_v3_save_pending_tables(struct kvm *kvm)
 		bit_nr = irq->intid % BITS_PER_BYTE;
 		ptr = pendbase + byte_offset;
 
-		if (byte_offset != last_byte_offset) {
+		if (ptr != last_ptr) {
 			ret = kvm_read_guest_lock(kvm, ptr, &val, 1);
 			if (ret)
 				return ret;
-			last_byte_offset = byte_offset;
+			last_ptr = ptr;
 		}
 
 		stored = val & (1U << bit_nr);