author	Oliver Upton <oliver.upton@linux.dev>	2024-08-22 07:17:09 +0000
committer	Oliver Upton <oliver.upton@linux.dev>	2024-08-22 07:41:00 +0000
commit	1d8c3c23a6bc1527e253b305b4b68c03d833b824 (patch)
tree	1de599ebf4723b0f3236ec1815ba05a102e09d8e /arch/arm64/kvm
parent	f616506754d34bcfdbfbc7508b562e5c98461e9a (diff)
KVM: arm64: Ensure canonical IPA is hugepage-aligned when handling fault
Zenghui reports that VMs backed by hugetlb pages are no longer booting
after commit fd276e71d1e7 ("KVM: arm64: nv: Handle shadow stage 2 page
faults").

Support for shadow stage-2 MMUs introduced the concept of a fault IPA
and canonical IPA to stage-2 fault handling. These are identical in the
non-nested case, as the hardware stage-2 context is always that of the
canonical IPA space.

Both addresses need to be hugepage-aligned when preparing to install a
hugepage mapping to ensure that KVM uses the correct GFN->PFN
translation and installs that at the correct IPA for the current
stage-2.

And now I'm feeling thirsty after all this talk of IPAs...

Fixes: fd276e71d1e7 ("KVM: arm64: nv: Handle shadow stage 2 page faults")
Reported-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20240822071710.2291690-1-oliver.upton@linux.dev
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
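As a rough illustration of why both addresses get masked, the following is
a minimal userspace sketch of the alignment arithmetic, not kernel code:
PAGE_SHIFT, PMD_SIZE and the example addresses are made-up values, and KVM
actually masks with vma_pagesize rather than a fixed block size.

/*
 * Standalone sketch: the GFN->PFN lookup uses the canonical IPA, while
 * the block mapping is installed at the fault IPA of the current stage-2.
 */
#include <stdio.h>

#define PAGE_SHIFT	12
#define PMD_SIZE	(1ULL << 21)	/* 2 MiB block (illustrative) */

int main(void)
{
	unsigned long long fault_ipa = 0x40234000ULL; /* IPA in the shadow stage-2 */
	unsigned long long ipa       = 0x80234000ULL; /* canonical IPA (GFN->PFN lookup) */

	/* Align both addresses to the hugepage boundary, as the patch does. */
	fault_ipa &= ~(PMD_SIZE - 1);
	ipa       &= ~(PMD_SIZE - 1);

	/*
	 * Aligning only one of the two would map the right page at the
	 * wrong place, or the wrong page at the right place.
	 */
	printf("gfn = 0x%llx, block mapping installed at 0x%llx\n",
	       ipa >> PAGE_SHIFT, fault_ipa);
	return 0;
}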
Diffstat (limited to 'arch/arm64/kvm')
-rw-r--r--	arch/arm64/kvm/mmu.c	9
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 6981b1bc0946..a509b63bd4dd 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1540,8 +1540,15 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		vma_pagesize = min(vma_pagesize, (long)max_map_size);
 	}
 
-	if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE)
+	/*
+	 * Both the canonical IPA and fault IPA must be hugepage-aligned to
+	 * ensure we find the right PFN and lay down the mapping in the right
+	 * place.
+	 */
+	if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE) {
 		fault_ipa &= ~(vma_pagesize - 1);
+		ipa &= ~(vma_pagesize - 1);
+	}
 
 	gfn = ipa >> PAGE_SHIFT;
 	mte_allowed = kvm_vma_mte_allowed(vma);