author		Yanan Wang <wangyanan55@huawei.com>	2020-12-02 04:10:33 +0800
committer	Marc Zyngier <maz@kernel.org>	2020-12-02 09:42:36 +0000
commit		3a0b870e3448302ca2ba703bea1b79b61c3f33c6 (patch)
tree		3820f50b0ad6b3d780ac5ea45d6ae09b614f32ef /arch/arm64/kvm
parent		5c646b7e1d8bcb12317426287c516dfa4c5171c2 (diff)
KVM: arm64: Fix handling of merging tables into a block entry
When dirty logging is enabled, we collapse block entries into tables
as necessary. If dirty logging gets canceled, we can end up merging
tables back into block entries.

When this happens, we must not only free the non-huge page-table
pages but also invalidate all the TLB entries that can potentially
cover the block. Otherwise, we end up with multiple possible
translations for the same physical page, which can legitimately
result in a TLB conflict.

To address this, replace the bogus invalidation by IPA with a full
VM invalidation. Although this is pretty heavy-handed, it happens
very infrequently and saves a bunch of invalidations by IPA.

Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
[maz: fixup commit message]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201201201034.116760-3-wangyanan55@huawei.com
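For reference, below is a sketch of the two hypercall prototypes involved,
assuming the v5.10-era declarations from arch/arm64 KVM; the annotations on
the trade-off are explanatory and not part of the patch:

	/*
	 * Invalidates stage-2 TLB entries for a single IPA. When a table
	 * is merged back into a block, the stale entries are the leaf
	 * entries that lived *below* the block, so one invalidation at
	 * the block's address cannot cover them all: old leaf
	 * translations can coexist with the new block translation,
	 * i.e. a TLB conflict.
	 */
	void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
				      phys_addr_t ipa, int level);

	/*
	 * Invalidates all stage-2 TLB entries for the VM's VMID.
	 * Heavier, but a single call covers every leaf entry that used
	 * to live under the block, which is why the patch switches to it.
	 */
	void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu);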
Diffstat (limited to 'arch/arm64/kvm')
-rw-r--r--	arch/arm64/kvm/hyp/pgtable.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 2beba1dc40ec..bdf8e55ed308 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -502,7 +502,13 @@ static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 level,
 		return 0;
 
 	kvm_set_invalid_pte(ptep);
-	kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, 0);
+
+	/*
+	 * Invalidate the whole stage-2, as we may have numerous leaf
+	 * entries below us which would otherwise need invalidating
+	 * individually.
+	 */
+	kvm_call_hyp(__kvm_tlb_flush_vmid, data->mmu);
 	data->anchor = ptep;
 	return 0;
 }
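For context, here is a condensed sketch (not verbatim kernel code) of how
the anchor set in the pre-order callback above is consumed later in the
same walk: once the walker returns to the anchored level, the post-order
callback frees the now-unreachable table pages and installs the block
entry. Helper names follow the v5.10-era pgtable.c; the bodies are
simplified:

	static int stage2_map_walk_table_post(u64 addr, u64 end, u32 level,
					      kvm_pte_t *ptep,
					      struct stage2_map_data *data)
	{
		if (!data->anchor)	/* Not merging a table: nothing to do. */
			return 0;

		/* Free the page-table page the invalidated entry pointed to. */
		free_page((unsigned long)kvm_pte_follow(*ptep));
		put_page(virt_to_page(ptep));

		if (data->anchor == ptep) {
			/* Back at the anchored level: install the block entry. */
			data->anchor = NULL;
			return stage2_map_walk_leaf(addr, end, level, ptep, data);
		}
		return 0;
	}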